2025-04-05 11:38:08.592654 | Job console starting...
2025-04-05 11:38:08.610359 | Updating repositories
2025-04-05 11:38:08.673090 | Preparing job workspace
2025-04-05 11:38:10.557649 | Running Ansible setup...
2025-04-05 11:38:15.560622 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-05 11:38:16.288041 |
2025-04-05 11:38:16.288247 | PLAY [Base pre]
2025-04-05 11:38:16.319404 |
2025-04-05 11:38:16.319527 | TASK [Setup log path fact]
2025-04-05 11:38:16.353091 | orchestrator | ok
2025-04-05 11:38:16.372434 |
2025-04-05 11:38:16.372559 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-05 11:38:16.413939 | orchestrator | ok
2025-04-05 11:38:16.429295 |
2025-04-05 11:38:16.429396 | TASK [emit-job-header : Print job information]
2025-04-05 11:38:16.481557 | # Job Information
2025-04-05 11:38:16.481716 | Ansible Version: 2.15.3
2025-04-05 11:38:16.481752 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-04-05 11:38:16.481784 | Pipeline: post
2025-04-05 11:38:16.481806 | Executor: 7d211f194f6a
2025-04-05 11:38:16.481825 | Triggered by: https://github.com/osism/testbed/commit/eb7c0fce0a10565765691bf26a932a74ae68525e
2025-04-05 11:38:16.481845 | Event ID: 6ea4a75c-1212-11f0-8a75-0002a8eee3c2
2025-04-05 11:38:16.489072 |
2025-04-05 11:38:16.489179 | LOOP [emit-job-header : Print node information]
2025-04-05 11:38:16.654511 | orchestrator | ok:
2025-04-05 11:38:16.654723 | orchestrator | # Node Information
2025-04-05 11:38:16.654758 | orchestrator | Inventory Hostname: orchestrator
2025-04-05 11:38:16.654784 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-05 11:38:16.654806 | orchestrator | Username: zuul-testbed06
2025-04-05 11:38:16.654827 | orchestrator | Distro: Debian 12.10
2025-04-05 11:38:16.654851 | orchestrator | Provider: static-testbed
2025-04-05 11:38:16.654872 | orchestrator | Label: testbed-orchestrator
2025-04-05 11:38:16.654893 | orchestrator | Product Name: OpenStack Nova
2025-04-05 11:38:16.654912 | orchestrator | Interface IP: 81.163.193.140
2025-04-05 11:38:16.685712 |
2025-04-05 11:38:16.685865 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-05 11:38:17.194076 | orchestrator -> localhost | changed
2025-04-05 11:38:17.203513 |
2025-04-05 11:38:17.203637 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-05 11:38:18.325754 | orchestrator -> localhost | changed
2025-04-05 11:38:18.347739 |
2025-04-05 11:38:18.347891 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-05 11:38:18.619662 | orchestrator -> localhost | ok
2025-04-05 11:38:18.634211 |
2025-04-05 11:38:18.634382 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-05 11:38:18.667724 | orchestrator | ok
2025-04-05 11:38:18.685441 | orchestrator | included: /var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-05 11:38:18.694391 |
2025-04-05 11:38:18.694490 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-05 11:38:19.531768 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-05 11:38:19.531974 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/acd7f2aa96a14e52945307d1493fa367_id_rsa
2025-04-05 11:38:19.532011 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/acd7f2aa96a14e52945307d1493fa367_id_rsa.pub
2025-04-05 11:38:19.532036 | orchestrator -> localhost | The key fingerprint is:
2025-04-05 11:38:19.532074 | orchestrator -> localhost | SHA256:WvNmlnJY7Sy/oIS3ac8aYz0bQ3dHsy8/l/zl8eDulPw zuul-build-sshkey
2025-04-05 11:38:19.532098 | orchestrator -> localhost | The key's randomart image is:
2025-04-05 11:38:19.532119 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-05 11:38:19.532139 | orchestrator -> localhost | | |
2025-04-05 11:38:19.532159 | orchestrator -> localhost | | |
2025-04-05 11:38:19.532189 | orchestrator -> localhost | | ..|
2025-04-05 11:38:19.532209 | orchestrator -> localhost | | . .o|
2025-04-05 11:38:19.532229 | orchestrator -> localhost | | S o o ...|
2025-04-05 11:38:19.532248 | orchestrator -> localhost | | + B = o o.|
2025-04-05 11:38:19.532275 | orchestrator -> localhost | | o O # o B.+|
2025-04-05 11:38:19.532295 | orchestrator -> localhost | | +.& O o X=|
2025-04-05 11:38:19.532315 | orchestrator -> localhost | | .=o+ o++ E|
2025-04-05 11:38:19.532334 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-05 11:38:19.532408 | orchestrator -> localhost | ok: Runtime: 0:00:00.317432
2025-04-05 11:38:19.541636 |
2025-04-05 11:38:19.541744 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-05 11:38:19.573764 | orchestrator | ok
2025-04-05 11:38:19.585922 | orchestrator | included: /var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-05 11:38:19.596715 |
2025-04-05 11:38:19.596815 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-05 11:38:19.631397 | orchestrator | skipping: Conditional result was False
2025-04-05 11:38:19.643267 |
2025-04-05 11:38:19.643395 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-05 11:38:20.218383 | orchestrator | changed
2025-04-05 11:38:20.244358 |
2025-04-05 11:38:20.244672 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-05 11:38:20.525728 | orchestrator | ok
2025-04-05 11:38:20.539577 |
2025-04-05 11:38:20.539710 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-05 11:38:20.940303 | orchestrator | ok
2025-04-05 11:38:20.950679 |
2025-04-05 11:38:20.950807 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-05 11:38:21.342429 | orchestrator | ok
2025-04-05 11:38:21.350777 |
2025-04-05 11:38:21.350884 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-05 11:38:21.375187 | orchestrator | skipping: Conditional result was False
2025-04-05 11:38:21.384175 |
2025-04-05 11:38:21.384283 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-05 11:38:21.822993 | orchestrator -> localhost | changed
2025-04-05 11:38:21.838355 |
2025-04-05 11:38:21.838471 | TASK [add-build-sshkey : Add back temp key]
2025-04-05 11:38:22.180376 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/acd7f2aa96a14e52945307d1493fa367_id_rsa (zuul-build-sshkey)
2025-04-05 11:38:22.180747 | orchestrator -> localhost | ok: Runtime: 0:00:00.022877
2025-04-05 11:38:22.191035 | 2025-04-05 11:38:22.191181 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-04-05 11:38:22.582107 | orchestrator | ok 2025-04-05 11:38:22.598266 | 2025-04-05 11:38:22.598384 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-04-05 11:38:22.633415 | orchestrator | skipping: Conditional result was False 2025-04-05 11:38:22.648827 | 2025-04-05 11:38:22.648930 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-04-05 11:38:23.089704 | orchestrator | ok 2025-04-05 11:38:23.108728 | 2025-04-05 11:38:23.108852 | TASK [validate-host : Define zuul_info_dir fact] 2025-04-05 11:38:23.156338 | orchestrator | ok 2025-04-05 11:38:23.165130 | 2025-04-05 11:38:23.165236 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-04-05 11:38:23.488368 | orchestrator -> localhost | ok 2025-04-05 11:38:23.507375 | 2025-04-05 11:38:23.507540 | TASK [validate-host : Collect information about the host] 2025-04-05 11:38:24.606815 | orchestrator | ok 2025-04-05 11:38:24.622175 | 2025-04-05 11:38:24.622287 | TASK [validate-host : Sanitize hostname] 2025-04-05 11:38:24.701389 | orchestrator | ok 2025-04-05 11:38:24.710990 | 2025-04-05 11:38:24.711160 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-04-05 11:38:25.265565 | orchestrator -> localhost | changed 2025-04-05 11:38:25.288999 | 2025-04-05 11:38:25.289292 | TASK [validate-host : Collect information about zuul worker] 2025-04-05 11:38:25.811817 | orchestrator | ok 2025-04-05 11:38:25.821889 | 2025-04-05 11:38:25.822016 | TASK [validate-host : Write out all zuul information for each host] 2025-04-05 11:38:26.377939 | orchestrator -> localhost | changed 2025-04-05 11:38:26.402352 | 2025-04-05 11:38:26.402483 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-04-05 11:38:26.688864 | orchestrator | ok 2025-04-05 11:38:26.713852 | 2025-04-05 11:38:26.713971 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-04-05 11:39:09.863705 | orchestrator | changed: 2025-04-05 11:39:09.863994 | orchestrator | .d..t...... src/ 2025-04-05 11:39:09.864053 | orchestrator | .d..t...... src/github.com/ 2025-04-05 11:39:09.864115 | orchestrator | .d..t...... src/github.com/osism/ 2025-04-05 11:39:09.864153 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-04-05 11:39:09.864188 | orchestrator | RedHat.yml 2025-04-05 11:39:09.881841 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-04-05 11:39:09.881859 | orchestrator | RedHat.yml 2025-04-05 11:39:09.881912 | orchestrator | = 1.53.0"... 2025-04-05 11:39:21.569477 | orchestrator | 11:39:21.569 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-04-05 11:39:22.830790 | orchestrator | 11:39:22.830 STDOUT terraform: - Installing hashicorp/null v3.2.3... 2025-04-05 11:39:23.772241 | orchestrator | 11:39:23.772 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80) 2025-04-05 11:39:24.640104 | orchestrator | 11:39:24.639 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-04-05 11:39:25.861616 | orchestrator | 11:39:25.861 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-04-05 11:39:26.773440 | orchestrator | 11:39:26.773 STDOUT terraform: - Installing hashicorp/local v2.5.2... 
2025-04-05 11:39:27.607587 | orchestrator | 11:39:27.607 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-04-05 11:39:27.607652 | orchestrator | 11:39:27.607 STDOUT terraform: Providers are signed by their developers. 2025-04-05 11:39:27.607664 | orchestrator | 11:39:27.607 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-04-05 11:39:27.607674 | orchestrator | 11:39:27.607 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-04-05 11:39:27.607682 | orchestrator | 11:39:27.607 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-04-05 11:39:27.607727 | orchestrator | 11:39:27.607 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-04-05 11:39:27.607777 | orchestrator | 11:39:27.607 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-04-05 11:39:27.607798 | orchestrator | 11:39:27.607 STDOUT terraform: you run "tofu init" in the future. 2025-04-05 11:39:27.608169 | orchestrator | 11:39:27.608 STDOUT terraform: OpenTofu has been successfully initialized! 2025-04-05 11:39:27.608205 | orchestrator | 11:39:27.608 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-04-05 11:39:27.608215 | orchestrator | 11:39:27.608 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-04-05 11:39:27.608290 | orchestrator | 11:39:27.608 STDOUT terraform: should now work. 2025-04-05 11:39:27.608306 | orchestrator | 11:39:27.608 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-04-05 11:39:27.608343 | orchestrator | 11:39:27.608 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-04-05 11:39:27.608389 | orchestrator | 11:39:27.608 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-04-05 11:39:27.760275 | orchestrator | 11:39:27.760 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-05 11:39:27.933961 | orchestrator | 11:39:27.932 STDOUT terraform: Created and switched to workspace "ci"! 2025-04-05 11:39:28.159699 | orchestrator | 11:39:27.932 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-04-05 11:39:28.159824 | orchestrator | 11:39:27.932 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-04-05 11:39:28.159844 | orchestrator | 11:39:27.933 STDOUT terraform: for this configuration. 2025-04-05 11:39:28.159889 | orchestrator | 11:39:28.159 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-05 11:39:28.269921 | orchestrator | 11:39:28.269 STDOUT terraform: ci.auto.tfvars 2025-04-05 11:39:28.447022 | orchestrator | 11:39:28.446 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-05 11:39:29.262675 | orchestrator | 11:39:29.262 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 
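
The init output above records which providers Terragrunt/OpenTofu resolved for the testbed configuration: hashicorp/null v3.2.3, terraform-provider-openstack/openstack v3.0.0 and hashicorp/local v2.5.2 (the latter against the ">= 2.2.0" constraint; a further ">= 1.53.0" constraint is visible in the log, but the provider name it belongs to is truncated in this excerpt). A minimal required_providers block consistent with that output would look roughly as follows; this is a sketch, and the actual constraints in the testbed repository may differ.

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
      # resolved to v3.0.0 during this init; the constraint used by the
      # repository is not fully visible in this excerpt
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"   # resolved to v2.5.2
    }
    null = {
      source = "hashicorp/null"
      # resolved to v3.2.3; constraint not visible in this excerpt
    }
  }
}

The .terraform.lock.hcl file mentioned in the init output then pins exactly these resolved versions for subsequent runs.
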
2025-04-05 11:39:29.802448 | orchestrator | 11:39:29.801 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-04-05 11:39:29.989681 | orchestrator | 11:39:29.987 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-04-05 11:39:29.989765 | orchestrator | 11:39:29.987 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-04-05 11:39:29.989773 | orchestrator | 11:39:29.987 STDOUT terraform:  + create 2025-04-05 11:39:29.989785 | orchestrator | 11:39:29.987 STDOUT terraform:  <= read (data resources) 2025-04-05 11:39:29.989791 | orchestrator | 11:39:29.987 STDOUT terraform: OpenTofu will perform the following actions: 2025-04-05 11:39:29.989797 | orchestrator | 11:39:29.987 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-04-05 11:39:29.989802 | orchestrator | 11:39:29.988 STDOUT terraform:  # (config refers to values not yet known) 2025-04-05 11:39:29.989807 | orchestrator | 11:39:29.988 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-04-05 11:39:29.989812 | orchestrator | 11:39:29.988 STDOUT terraform:  + checksum = (known after apply) 2025-04-05 11:39:29.989818 | orchestrator | 11:39:29.988 STDOUT terraform:  + created_at = (known after apply) 2025-04-05 11:39:29.989822 | orchestrator | 11:39:29.988 STDOUT terraform:  + file = (known after apply) 2025-04-05 11:39:29.989828 | orchestrator | 11:39:29.988 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.989832 | orchestrator | 11:39:29.988 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.989837 | orchestrator | 11:39:29.988 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-05 11:39:29.989842 | orchestrator | 11:39:29.988 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-05 11:39:29.989850 | orchestrator | 11:39:29.988 STDOUT terraform:  + most_recent = true 2025-04-05 11:39:29.989855 | orchestrator | 11:39:29.988 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:29.989874 | orchestrator | 11:39:29.988 STDOUT terraform:  + protected = (known after apply) 2025-04-05 11:39:29.989879 | orchestrator | 11:39:29.988 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.989884 | orchestrator | 11:39:29.988 STDOUT terraform:  + schema = (known after apply) 2025-04-05 11:39:29.989889 | orchestrator | 11:39:29.988 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-05 11:39:29.989895 | orchestrator | 11:39:29.988 STDOUT terraform:  + tags = (known after apply) 2025-04-05 11:39:29.989905 | orchestrator | 11:39:29.988 STDOUT terraform:  + updated_at = (known after apply) 2025-04-05 11:39:29.989911 | orchestrator | 11:39:29.989 STDOUT terraform:  } 2025-04-05 11:39:29.989918 | orchestrator | 11:39:29.989 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-04-05 11:39:29.989923 | orchestrator | 11:39:29.989 STDOUT terraform:  # (config refers to values not yet known) 2025-04-05 11:39:29.989928 | orchestrator | 11:39:29.989 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-04-05 11:39:29.989936 | orchestrator | 11:39:29.989 STDOUT terraform:  + checksum = (known after apply) 2025-04-05 11:39:29.989941 | orchestrator | 11:39:29.989 STDOUT terraform:  + created_at = (known after apply) 2025-04-05 11:39:29.989945 | orchestrator | 11:39:29.989 STDOUT terraform:  + file = (known 
after apply) 2025-04-05 11:39:29.989950 | orchestrator | 11:39:29.989 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.989955 | orchestrator | 11:39:29.989 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.989960 | orchestrator | 11:39:29.989 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-05 11:39:29.989965 | orchestrator | 11:39:29.989 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-05 11:39:29.989972 | orchestrator | 11:39:29.989 STDOUT terraform:  + most_recent = true 2025-04-05 11:39:29.989991 | orchestrator | 11:39:29.989 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:29.989997 | orchestrator | 11:39:29.989 STDOUT terraform:  + protected = (known after apply) 2025-04-05 11:39:29.990002 | orchestrator | 11:39:29.989 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.990006 | orchestrator | 11:39:29.989 STDOUT terraform:  + schema = (known after apply) 2025-04-05 11:39:29.990033 | orchestrator | 11:39:29.989 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-05 11:39:29.990042 | orchestrator | 11:39:29.989 STDOUT terraform:  + tags = (known after apply) 2025-04-05 11:39:29.990078 | orchestrator | 11:39:29.989 STDOUT terraform:  + updated_at = (known after apply) 2025-04-05 11:39:29.990127 | orchestrator | 11:39:29.990 STDOUT terraform:  } 2025-04-05 11:39:29.990178 | orchestrator | 11:39:29.990 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-04-05 11:39:29.990238 | orchestrator | 11:39:29.990 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-04-05 11:39:29.990313 | orchestrator | 11:39:29.990 STDOUT terraform:  + content = (known after apply) 2025-04-05 11:39:29.990385 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-05 11:39:29.990452 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-05 11:39:29.990523 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-05 11:39:29.990594 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-05 11:39:29.990665 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-05 11:39:29.990739 | orchestrator | 11:39:29.990 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-05 11:39:29.990792 | orchestrator | 11:39:29.990 STDOUT terraform:  + directory_permission = "0777" 2025-04-05 11:39:29.990840 | orchestrator | 11:39:29.990 STDOUT terraform:  + file_permission = "0644" 2025-04-05 11:39:29.990914 | orchestrator | 11:39:29.990 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-04-05 11:39:29.990986 | orchestrator | 11:39:29.990 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.991012 | orchestrator | 11:39:29.990 STDOUT terraform:  } 2025-04-05 11:39:29.991067 | orchestrator | 11:39:29.991 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-04-05 11:39:29.991121 | orchestrator | 11:39:29.991 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-04-05 11:39:29.991192 | orchestrator | 11:39:29.991 STDOUT terraform:  + content = (known after apply) 2025-04-05 11:39:29.991275 | orchestrator | 11:39:29.991 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-05 11:39:29.991343 | orchestrator | 11:39:29.991 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-05 11:39:29.991418 | orchestrator | 
11:39:29.991 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-05 11:39:29.991488 | orchestrator | 11:39:29.991 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-05 11:39:29.991561 | orchestrator | 11:39:29.991 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-05 11:39:29.991646 | orchestrator | 11:39:29.991 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-05 11:39:29.991683 | orchestrator | 11:39:29.991 STDOUT terraform:  + directory_permission = "0777" 2025-04-05 11:39:29.991731 | orchestrator | 11:39:29.991 STDOUT terraform:  + file_permission = "0644" 2025-04-05 11:39:29.991795 | orchestrator | 11:39:29.991 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-04-05 11:39:29.991876 | orchestrator | 11:39:29.991 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.991884 | orchestrator | 11:39:29.991 STDOUT terraform:  } 2025-04-05 11:39:29.991999 | orchestrator | 11:39:29.991 STDOUT terraform:  # local_file.inventory will be created 2025-04-05 11:39:29.992043 | orchestrator | 11:39:29.991 STDOUT terraform:  + resource "local_file" "inventory" { 2025-04-05 11:39:29.992115 | orchestrator | 11:39:29.992 STDOUT terraform:  + content = (known after apply) 2025-04-05 11:39:29.992186 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-05 11:39:29.992271 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-05 11:39:29.992345 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-05 11:39:29.992415 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-05 11:39:29.992485 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-05 11:39:29.992556 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-05 11:39:29.992603 | orchestrator | 11:39:29.992 STDOUT terraform:  + directory_permission = "0777" 2025-04-05 11:39:29.992651 | orchestrator | 11:39:29.992 STDOUT terraform:  + file_permission = "0644" 2025-04-05 11:39:29.992715 | orchestrator | 11:39:29.992 STDOUT terraform:  + filename = "inventory.ci" 2025-04-05 11:39:29.992786 | orchestrator | 11:39:29.992 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.992815 | orchestrator | 11:39:29.992 STDOUT terraform:  } 2025-04-05 11:39:29.992876 | orchestrator | 11:39:29.992 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-04-05 11:39:29.992935 | orchestrator | 11:39:29.992 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-04-05 11:39:29.992998 | orchestrator | 11:39:29.992 STDOUT terraform:  + content = (sensitive value) 2025-04-05 11:39:29.993067 | orchestrator | 11:39:29.992 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-05 11:39:29.993146 | orchestrator | 11:39:29.993 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-05 11:39:29.993212 | orchestrator | 11:39:29.993 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-05 11:39:29.993318 | orchestrator | 11:39:29.993 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-05 11:39:29.993385 | orchestrator | 11:39:29.993 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-05 11:39:29.993457 | orchestrator | 11:39:29.993 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-05 11:39:29.993505 | orchestrator | 11:39:29.993 STDOUT 
terraform:  + directory_permission = "0700" 2025-04-05 11:39:29.993553 | orchestrator | 11:39:29.993 STDOUT terraform:  + file_permission = "0600" 2025-04-05 11:39:29.993616 | orchestrator | 11:39:29.993 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-04-05 11:39:29.993689 | orchestrator | 11:39:29.993 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.993718 | orchestrator | 11:39:29.993 STDOUT terraform:  } 2025-04-05 11:39:29.993779 | orchestrator | 11:39:29.993 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-04-05 11:39:29.993840 | orchestrator | 11:39:29.993 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-04-05 11:39:29.993883 | orchestrator | 11:39:29.993 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.993911 | orchestrator | 11:39:29.993 STDOUT terraform:  } 2025-04-05 11:39:29.994035 | orchestrator | 11:39:29.993 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-04-05 11:39:29.994123 | orchestrator | 11:39:29.994 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-04-05 11:39:29.994187 | orchestrator | 11:39:29.994 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.994240 | orchestrator | 11:39:29.994 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.994303 | orchestrator | 11:39:29.994 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.994366 | orchestrator | 11:39:29.994 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.994427 | orchestrator | 11:39:29.994 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.994507 | orchestrator | 11:39:29.994 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-04-05 11:39:29.994569 | orchestrator | 11:39:29.994 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.994611 | orchestrator | 11:39:29.994 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.994653 | orchestrator | 11:39:29.994 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.994681 | orchestrator | 11:39:29.994 STDOUT terraform:  } 2025-04-05 11:39:29.994775 | orchestrator | 11:39:29.994 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-04-05 11:39:29.994867 | orchestrator | 11:39:29.994 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.994930 | orchestrator | 11:39:29.994 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.994975 | orchestrator | 11:39:29.994 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.995038 | orchestrator | 11:39:29.994 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.995099 | orchestrator | 11:39:29.995 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.995161 | orchestrator | 11:39:29.995 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.995428 | orchestrator | 11:39:29.995 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-04-05 11:39:29.995529 | orchestrator | 11:39:29.995 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.995549 | orchestrator | 11:39:29.995 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.995565 | orchestrator | 11:39:29.995 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.995580 | orchestrator | 11:39:29.995 STDOUT terraform:  } 2025-04-05 11:39:29.995600 | orchestrator | 11:39:29.995 STDOUT 
terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-04-05 11:39:29.995643 | orchestrator | 11:39:29.995 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.995662 | orchestrator | 11:39:29.995 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.995680 | orchestrator | 11:39:29.995 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.995755 | orchestrator | 11:39:29.995 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.995803 | orchestrator | 11:39:29.995 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.995868 | orchestrator | 11:39:29.995 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.995943 | orchestrator | 11:39:29.995 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-04-05 11:39:29.996017 | orchestrator | 11:39:29.995 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.996059 | orchestrator | 11:39:29.995 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.996075 | orchestrator | 11:39:29.996 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.996094 | orchestrator | 11:39:29.996 STDOUT terraform:  } 2025-04-05 11:39:29.996194 | orchestrator | 11:39:29.996 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-04-05 11:39:29.996321 | orchestrator | 11:39:29.996 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.996383 | orchestrator | 11:39:29.996 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.996437 | orchestrator | 11:39:29.996 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.996490 | orchestrator | 11:39:29.996 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.996552 | orchestrator | 11:39:29.996 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.996603 | orchestrator | 11:39:29.996 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.996670 | orchestrator | 11:39:29.996 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-04-05 11:39:29.996723 | orchestrator | 11:39:29.996 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.996742 | orchestrator | 11:39:29.996 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.996784 | orchestrator | 11:39:29.996 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.996875 | orchestrator | 11:39:29.996 STDOUT terraform:  } 2025-04-05 11:39:29.996894 | orchestrator | 11:39:29.996 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-04-05 11:39:29.996962 | orchestrator | 11:39:29.996 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.997004 | orchestrator | 11:39:29.996 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.997023 | orchestrator | 11:39:29.996 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.997092 | orchestrator | 11:39:29.997 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.997134 | orchestrator | 11:39:29.997 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.997190 | orchestrator | 11:39:29.997 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.997275 | orchestrator | 11:39:29.997 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-04-05 11:39:29.997321 | orchestrator | 
11:39:29.997 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.997340 | orchestrator | 11:39:29.997 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.997381 | orchestrator | 11:39:29.997 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.997476 | orchestrator | 11:39:29.997 STDOUT terraform:  } 2025-04-05 11:39:29.997496 | orchestrator | 11:39:29.997 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-04-05 11:39:29.997556 | orchestrator | 11:39:29.997 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.997607 | orchestrator | 11:39:29.997 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.997635 | orchestrator | 11:39:29.997 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.997691 | orchestrator | 11:39:29.997 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.997756 | orchestrator | 11:39:29.997 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.997810 | orchestrator | 11:39:29.997 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.997875 | orchestrator | 11:39:29.997 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-04-05 11:39:29.997929 | orchestrator | 11:39:29.997 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.997947 | orchestrator | 11:39:29.997 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.997990 | orchestrator | 11:39:29.997 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.998096 | orchestrator | 11:39:29.997 STDOUT terraform:  } 2025-04-05 11:39:29.998118 | orchestrator | 11:39:29.997 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-04-05 11:39:29.998179 | orchestrator | 11:39:29.998 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-05 11:39:29.998258 | orchestrator | 11:39:29.998 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.998326 | orchestrator | 11:39:29.998 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.998345 | orchestrator | 11:39:29.998 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.998362 | orchestrator | 11:39:29.998 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:29.998431 | orchestrator | 11:39:29.998 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.998497 | orchestrator | 11:39:29.998 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-04-05 11:39:29.998553 | orchestrator | 11:39:29.998 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.998571 | orchestrator | 11:39:29.998 STDOUT terraform:  + size = 80 2025-04-05 11:39:29.998613 | orchestrator | 11:39:29.998 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.998700 | orchestrator | 11:39:29.998 STDOUT terraform:  } 2025-04-05 11:39:29.998719 | orchestrator | 11:39:29.998 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-04-05 11:39:29.998779 | orchestrator | 11:39:29.998 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:29.998832 | orchestrator | 11:39:29.998 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.998851 | orchestrator | 11:39:29.998 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.998919 | orchestrator | 11:39:29.998 STDOUT terraform:  + id = (known after apply) 
2025-04-05 11:39:29.998962 | orchestrator | 11:39:29.998 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.999027 | orchestrator | 11:39:29.998 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-04-05 11:39:29.999080 | orchestrator | 11:39:29.999 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.999098 | orchestrator | 11:39:29.999 STDOUT terraform:  + size = 20 2025-04-05 11:39:29.999125 | orchestrator | 11:39:29.999 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.999143 | orchestrator | 11:39:29.999 STDOUT terraform:  } 2025-04-05 11:39:29.999260 | orchestrator | 11:39:29.999 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-04-05 11:39:29.999399 | orchestrator | 11:39:29.999 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:29.999426 | orchestrator | 11:39:29.999 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.999433 | orchestrator | 11:39:29.999 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.999440 | orchestrator | 11:39:29.999 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.999473 | orchestrator | 11:39:29.999 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:29.999529 | orchestrator | 11:39:29.999 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-05 11:39:29.999574 | orchestrator | 11:39:29.999 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:29.999607 | orchestrator | 11:39:29.999 STDOUT terraform:  + size = 20 2025-04-05 11:39:29.999640 | orchestrator | 11:39:29.999 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:29.999658 | orchestrator | 11:39:29.999 STDOUT terraform:  } 2025-04-05 11:39:29.999723 | orchestrator | 11:39:29.999 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-05 11:39:29.999786 | orchestrator | 11:39:29.999 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:29.999833 | orchestrator | 11:39:29.999 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:29.999863 | orchestrator | 11:39:29.999 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:29.999915 | orchestrator | 11:39:29.999 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:29.999960 | orchestrator | 11:39:29.999 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.000017 | orchestrator | 11:39:29.999 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-05 11:39:30.000062 | orchestrator | 11:39:30.000 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.000094 | orchestrator | 11:39:30.000 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.000124 | orchestrator | 11:39:30.000 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.000142 | orchestrator | 11:39:30.000 STDOUT terraform:  } 2025-04-05 11:39:30.000207 | orchestrator | 11:39:30.000 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-05 11:39:30.000287 | orchestrator | 11:39:30.000 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.000331 | orchestrator | 11:39:30.000 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.000359 | orchestrator | 11:39:30.000 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.000400 | orchestrator | 11:39:30.000 STDOUT terraform:  + id = 
(known after apply) 2025-04-05 11:39:30.000445 | orchestrator | 11:39:30.000 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.000502 | orchestrator | 11:39:30.000 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-05 11:39:30.000548 | orchestrator | 11:39:30.000 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.000579 | orchestrator | 11:39:30.000 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.000612 | orchestrator | 11:39:30.000 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.000635 | orchestrator | 11:39:30.000 STDOUT terraform:  } 2025-04-05 11:39:30.000699 | orchestrator | 11:39:30.000 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-05 11:39:30.000762 | orchestrator | 11:39:30.000 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.000812 | orchestrator | 11:39:30.000 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.000839 | orchestrator | 11:39:30.000 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.000885 | orchestrator | 11:39:30.000 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.000931 | orchestrator | 11:39:30.000 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.000992 | orchestrator | 11:39:30.000 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-05 11:39:30.001039 | orchestrator | 11:39:30.000 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.001065 | orchestrator | 11:39:30.001 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.001091 | orchestrator | 11:39:30.001 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.001100 | orchestrator | 11:39:30.001 STDOUT terraform:  } 2025-04-05 11:39:30.001172 | orchestrator | 11:39:30.001 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-05 11:39:30.001248 | orchestrator | 11:39:30.001 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.001294 | orchestrator | 11:39:30.001 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.001325 | orchestrator | 11:39:30.001 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.001370 | orchestrator | 11:39:30.001 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.001420 | orchestrator | 11:39:30.001 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.001470 | orchestrator | 11:39:30.001 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-05 11:39:30.001513 | orchestrator | 11:39:30.001 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.001545 | orchestrator | 11:39:30.001 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.001576 | orchestrator | 11:39:30.001 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.001595 | orchestrator | 11:39:30.001 STDOUT terraform:  } 2025-04-05 11:39:30.001659 | orchestrator | 11:39:30.001 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-05 11:39:30.001723 | orchestrator | 11:39:30.001 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.001769 | orchestrator | 11:39:30.001 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.001795 | orchestrator | 11:39:30.001 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.001842 | orchestrator | 11:39:30.001 STDOUT 
terraform:  + id = (known after apply) 2025-04-05 11:39:30.001889 | orchestrator | 11:39:30.001 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.001944 | orchestrator | 11:39:30.001 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-05 11:39:30.001991 | orchestrator | 11:39:30.001 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.002028 | orchestrator | 11:39:30.001 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.002070 | orchestrator | 11:39:30.002 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.002078 | orchestrator | 11:39:30.002 STDOUT terraform:  } 2025-04-05 11:39:30.002150 | orchestrator | 11:39:30.002 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-05 11:39:30.002213 | orchestrator | 11:39:30.002 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.002384 | orchestrator | 11:39:30.002 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.002428 | orchestrator | 11:39:30.002 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.002435 | orchestrator | 11:39:30.002 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.002443 | orchestrator | 11:39:30.002 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.002479 | orchestrator | 11:39:30.002 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-05 11:39:30.002488 | orchestrator | 11:39:30.002 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.002506 | orchestrator | 11:39:30.002 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.002536 | orchestrator | 11:39:30.002 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.002555 | orchestrator | 11:39:30.002 STDOUT terraform:  } 2025-04-05 11:39:30.002619 | orchestrator | 11:39:30.002 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-05 11:39:30.002683 | orchestrator | 11:39:30.002 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.002729 | orchestrator | 11:39:30.002 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.002754 | orchestrator | 11:39:30.002 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.002801 | orchestrator | 11:39:30.002 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.002846 | orchestrator | 11:39:30.002 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.002901 | orchestrator | 11:39:30.002 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-05 11:39:30.002947 | orchestrator | 11:39:30.002 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.002978 | orchestrator | 11:39:30.002 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.003009 | orchestrator | 11:39:30.002 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.003033 | orchestrator | 11:39:30.003 STDOUT terraform:  } 2025-04-05 11:39:30.003096 | orchestrator | 11:39:30.003 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-05 11:39:30.003160 | orchestrator | 11:39:30.003 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.003205 | orchestrator | 11:39:30.003 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.003264 | orchestrator | 11:39:30.003 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.003313 | orchestrator | 
11:39:30.003 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.003359 | orchestrator | 11:39:30.003 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.003416 | orchestrator | 11:39:30.003 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-05 11:39:30.003461 | orchestrator | 11:39:30.003 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.003494 | orchestrator | 11:39:30.003 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.003525 | orchestrator | 11:39:30.003 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.003542 | orchestrator | 11:39:30.003 STDOUT terraform:  } 2025-04-05 11:39:30.003606 | orchestrator | 11:39:30.003 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-05 11:39:30.003663 | orchestrator | 11:39:30.003 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.003704 | orchestrator | 11:39:30.003 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.003732 | orchestrator | 11:39:30.003 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.003773 | orchestrator | 11:39:30.003 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.003817 | orchestrator | 11:39:30.003 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.003865 | orchestrator | 11:39:30.003 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-05 11:39:30.003907 | orchestrator | 11:39:30.003 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.003934 | orchestrator | 11:39:30.003 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.003962 | orchestrator | 11:39:30.003 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.003970 | orchestrator | 11:39:30.003 STDOUT terraform:  } 2025-04-05 11:39:30.004034 | orchestrator | 11:39:30.003 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-05 11:39:30.004091 | orchestrator | 11:39:30.004 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.004132 | orchestrator | 11:39:30.004 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.004160 | orchestrator | 11:39:30.004 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.004201 | orchestrator | 11:39:30.004 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.004460 | orchestrator | 11:39:30.004 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.004543 | orchestrator | 11:39:30.004 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-05 11:39:30.004584 | orchestrator | 11:39:30.004 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.004599 | orchestrator | 11:39:30.004 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.004615 | orchestrator | 11:39:30.004 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.004629 | orchestrator | 11:39:30.004 STDOUT terraform:  } 2025-04-05 11:39:30.004651 | orchestrator | 11:39:30.004 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-05 11:39:30.004707 | orchestrator | 11:39:30.004 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.004723 | orchestrator | 11:39:30.004 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.004738 | orchestrator | 11:39:30.004 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 
11:39:30.004752 | orchestrator | 11:39:30.004 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.004771 | orchestrator | 11:39:30.004 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.004810 | orchestrator | 11:39:30.004 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-05 11:39:30.004825 | orchestrator | 11:39:30.004 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.004843 | orchestrator | 11:39:30.004 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.004916 | orchestrator | 11:39:30.004 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.004931 | orchestrator | 11:39:30.004 STDOUT terraform:  } 2025-04-05 11:39:30.004950 | orchestrator | 11:39:30.004 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-05 11:39:30.004968 | orchestrator | 11:39:30.004 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.005011 | orchestrator | 11:39:30.004 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.005030 | orchestrator | 11:39:30.004 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.005071 | orchestrator | 11:39:30.005 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.005114 | orchestrator | 11:39:30.005 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.005170 | orchestrator | 11:39:30.005 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-05 11:39:30.005248 | orchestrator | 11:39:30.005 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.005265 | orchestrator | 11:39:30.005 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.005282 | orchestrator | 11:39:30.005 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.005358 | orchestrator | 11:39:30.005 STDOUT terraform:  } 2025-04-05 11:39:30.005377 | orchestrator | 11:39:30.005 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-05 11:39:30.005420 | orchestrator | 11:39:30.005 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.005438 | orchestrator | 11:39:30.005 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.005481 | orchestrator | 11:39:30.005 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.005509 | orchestrator | 11:39:30.005 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.005564 | orchestrator | 11:39:30.005 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.005616 | orchestrator | 11:39:30.005 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-05 11:39:30.005670 | orchestrator | 11:39:30.005 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.005686 | orchestrator | 11:39:30.005 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.005703 | orchestrator | 11:39:30.005 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.005775 | orchestrator | 11:39:30.005 STDOUT terraform:  } 2025-04-05 11:39:30.005793 | orchestrator | 11:39:30.005 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-05 11:39:30.005846 | orchestrator | 11:39:30.005 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.005865 | orchestrator | 11:39:30.005 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.005882 | orchestrator | 11:39:30.005 STDOUT terraform:  + 
availability_zone = "nova" 2025-04-05 11:39:30.005933 | orchestrator | 11:39:30.005 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.005960 | orchestrator | 11:39:30.005 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.006049 | orchestrator | 11:39:30.005 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-05 11:39:30.006072 | orchestrator | 11:39:30.005 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.006090 | orchestrator | 11:39:30.006 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.006108 | orchestrator | 11:39:30.006 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.006126 | orchestrator | 11:39:30.006 STDOUT terraform:  } 2025-04-05 11:39:30.006194 | orchestrator | 11:39:30.006 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-05 11:39:30.006253 | orchestrator | 11:39:30.006 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.006308 | orchestrator | 11:39:30.006 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.006359 | orchestrator | 11:39:30.006 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.006377 | orchestrator | 11:39:30.006 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.006445 | orchestrator | 11:39:30.006 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.006464 | orchestrator | 11:39:30.006 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-05 11:39:30.006481 | orchestrator | 11:39:30.006 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.006498 | orchestrator | 11:39:30.006 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.006516 | orchestrator | 11:39:30.006 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.006533 | orchestrator | 11:39:30.006 STDOUT terraform:  } 2025-04-05 11:39:30.006604 | orchestrator | 11:39:30.006 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-05 11:39:30.006657 | orchestrator | 11:39:30.006 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-05 11:39:30.006677 | orchestrator | 11:39:30.006 STDOUT terraform:  + attachment = (known after apply) 2025-04-05 11:39:30.006730 | orchestrator | 11:39:30.006 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.006748 | orchestrator | 11:39:30.006 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.006789 | orchestrator | 11:39:30.006 STDOUT terraform:  + metadata = (known after apply) 2025-04-05 11:39:30.006844 | orchestrator | 11:39:30.006 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-05 11:39:30.006862 | orchestrator | 11:39:30.006 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.006903 | orchestrator | 11:39:30.006 STDOUT terraform:  + size = 20 2025-04-05 11:39:30.006919 | orchestrator | 11:39:30.006 STDOUT terraform:  + volume_type = "ssd" 2025-04-05 11:39:30.006936 | orchestrator | 11:39:30.006 STDOUT terraform:  } 2025-04-05 11:39:30.006992 | orchestrator | 11:39:30.006 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-04-05 11:39:30.007049 | orchestrator | 11:39:30.006 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-04-05 11:39:30.007092 | orchestrator | 11:39:30.007 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.007135 | orchestrator | 11:39:30.007 
STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.007176 | orchestrator | 11:39:30.007 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.007247 | orchestrator | 11:39:30.007 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.007265 | orchestrator | 11:39:30.007 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.007282 | orchestrator | 11:39:30.007 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.007337 | orchestrator | 11:39:30.007 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.007355 | orchestrator | 11:39:30.007 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.007408 | orchestrator | 11:39:30.007 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-04-05 11:39:30.007426 | orchestrator | 11:39:30.007 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.007550 | orchestrator | 11:39:30.007 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.007570 | orchestrator | 11:39:30.007 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.007579 | orchestrator | 11:39:30.007 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.007587 | orchestrator | 11:39:30.007 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.007633 | orchestrator | 11:39:30.007 STDOUT terraform:  + name = "testbed-manager" 2025-04-05 11:39:30.007665 | orchestrator | 11:39:30.007 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.007711 | orchestrator | 11:39:30.007 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.007757 | orchestrator | 11:39:30.007 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.007787 | orchestrator | 11:39:30.007 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.007833 | orchestrator | 11:39:30.007 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.007881 | orchestrator | 11:39:30.007 STDOUT terraform:  + user_data = (known after apply) 2025-04-05 11:39:30.007899 | orchestrator | 11:39:30.007 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.007929 | orchestrator | 11:39:30.007 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.007966 | orchestrator | 11:39:30.007 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.008005 | orchestrator | 11:39:30.007 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.008041 | orchestrator | 11:39:30.007 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.008083 | orchestrator | 11:39:30.008 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.008133 | orchestrator | 11:39:30.008 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.008143 | orchestrator | 11:39:30.008 STDOUT terraform:  } 2025-04-05 11:39:30.008160 | orchestrator | 11:39:30.008 STDOUT terraform:  + network { 2025-04-05 11:39:30.008188 | orchestrator | 11:39:30.008 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.008238 | orchestrator | 11:39:30.008 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.008280 | orchestrator | 11:39:30.008 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.008321 | orchestrator | 11:39:30.008 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.008362 | orchestrator | 11:39:30.008 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.008403 | orchestrator | 11:39:30.008 STDOUT 
terraform:  + port = (known after apply) 2025-04-05 11:39:30.008445 | orchestrator | 11:39:30.008 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.008467 | orchestrator | 11:39:30.008 STDOUT terraform:  } 2025-04-05 11:39:30.008475 | orchestrator | 11:39:30.008 STDOUT terraform:  } 2025-04-05 11:39:30.008533 | orchestrator | 11:39:30.008 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-04-05 11:39:30.008589 | orchestrator | 11:39:30.008 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.008635 | orchestrator | 11:39:30.008 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.008682 | orchestrator | 11:39:30.008 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.008730 | orchestrator | 11:39:30.008 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.008777 | orchestrator | 11:39:30.008 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.008807 | orchestrator | 11:39:30.008 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.008834 | orchestrator | 11:39:30.008 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.008880 | orchestrator | 11:39:30.008 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.008926 | orchestrator | 11:39:30.008 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.008964 | orchestrator | 11:39:30.008 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.008995 | orchestrator | 11:39:30.008 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.009042 | orchestrator | 11:39:30.008 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.009089 | orchestrator | 11:39:30.009 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.009135 | orchestrator | 11:39:30.009 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.009167 | orchestrator | 11:39:30.009 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.009207 | orchestrator | 11:39:30.009 STDOUT terraform:  + name = "testbed-node-0" 2025-04-05 11:39:30.009265 | orchestrator | 11:39:30.009 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.009312 | orchestrator | 11:39:30.009 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.009358 | orchestrator | 11:39:30.009 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.009389 | orchestrator | 11:39:30.009 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.009436 | orchestrator | 11:39:30.009 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.009503 | orchestrator | 11:39:30.009 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.009521 | orchestrator | 11:39:30.009 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.009552 | orchestrator | 11:39:30.009 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.009589 | orchestrator | 11:39:30.009 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.009627 | orchestrator | 11:39:30.009 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.009664 | orchestrator | 11:39:30.009 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.009701 | orchestrator | 11:39:30.009 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.009747 | orchestrator | 11:39:30.009 STDOUT terraform:  + uuid = (known after apply) 
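[editor's note] The plan above creates a boot-from-volume manager instance (flavor "OSISM-4V-16") and testbed-node-* instances (flavor "OSISM-8V-32"), each with key pair "testbed", availability zone "nova", config_drive enabled and a volume-backed block_device with boot_index 0. The following is a minimal Terraform sketch of that pattern, reconstructed only from the attribute values visible in the plan; it is not the actual osism/testbed source. var.node_count, the boot-volume resource name, the user_data file and the port wiring are assumptions.

  # Sketch only: a boot-from-volume node as implied by the plan output.
  # "node_boot_volume", var.node_count and the user_data path are assumed names.
  resource "openstack_compute_instance_v2" "node_server" {
    count             = var.node_count
    name              = "testbed-node-${count.index}"
    availability_zone = "nova"
    flavor_name       = "OSISM-8V-32"          # the manager uses "OSISM-4V-16"
    key_pair          = "testbed"
    config_drive      = true
    power_state       = "active"
    user_data         = file("${path.module}/user_data.yml")  # plan only shows its hash

    block_device {
      uuid                  = openstack_blockstorage_volume_v3.node_boot_volume[count.index].id  # assumed boot volume
      source_type           = "volume"
      destination_type      = "volume"
      boot_index            = 0
      delete_on_termination = false
    }

    network {
      port = openstack_networking_port_v2.node_port_management[count.index].id
    }
  }

delete_on_termination = false matches the plan and keeps the boot volume when an instance is deleted; the actual boot-volume resource is created elsewhere in the plan (its uuid shows as "known after apply" above).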
2025-04-05 11:39:30.009754 | orchestrator | 11:39:30.009 STDOUT terraform:  } 2025-04-05 11:39:30.009771 | orchestrator | 11:39:30.009 STDOUT terraform:  + network { 2025-04-05 11:39:30.009798 | orchestrator | 11:39:30.009 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.009835 | orchestrator | 11:39:30.009 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.009871 | orchestrator | 11:39:30.009 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.009910 | orchestrator | 11:39:30.009 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.009947 | orchestrator | 11:39:30.009 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.009990 | orchestrator | 11:39:30.009 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.010027 | orchestrator | 11:39:30.009 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.010059 | orchestrator | 11:39:30.010 STDOUT terraform:  } 2025-04-05 11:39:30.010067 | orchestrator | 11:39:30.010 STDOUT terraform:  } 2025-04-05 11:39:30.013651 | orchestrator | 11:39:30.010 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-04-05 11:39:30.013691 | orchestrator | 11:39:30.010 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.013699 | orchestrator | 11:39:30.010 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.013705 | orchestrator | 11:39:30.010 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.013710 | orchestrator | 11:39:30.010 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.013716 | orchestrator | 11:39:30.010 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.013721 | orchestrator | 11:39:30.010 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.013726 | orchestrator | 11:39:30.010 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.013731 | orchestrator | 11:39:30.010 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.013736 | orchestrator | 11:39:30.010 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.013747 | orchestrator | 11:39:30.010 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.013752 | orchestrator | 11:39:30.010 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.013757 | orchestrator | 11:39:30.010 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.013762 | orchestrator | 11:39:30.010 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.013766 | orchestrator | 11:39:30.010 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.013771 | orchestrator | 11:39:30.010 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.013777 | orchestrator | 11:39:30.010 STDOUT terraform:  + name = "testbed-node-1" 2025-04-05 11:39:30.013782 | orchestrator | 11:39:30.010 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.013787 | orchestrator | 11:39:30.010 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.013792 | orchestrator | 11:39:30.010 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.013797 | orchestrator | 11:39:30.010 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.013802 | orchestrator | 11:39:30.010 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.013807 | orchestrator | 11:39:30.010 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.013812 | orchestrator | 11:39:30.010 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.013817 | orchestrator | 11:39:30.010 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.013822 | orchestrator | 11:39:30.010 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.013827 | orchestrator | 11:39:30.010 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.013831 | orchestrator | 11:39:30.010 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.013845 | orchestrator | 11:39:30.011 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.013850 | orchestrator | 11:39:30.011 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.013855 | orchestrator | 11:39:30.011 STDOUT terraform:  } 2025-04-05 11:39:30.013860 | orchestrator | 11:39:30.011 STDOUT terraform:  + network { 2025-04-05 11:39:30.013865 | orchestrator | 11:39:30.011 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.013869 | orchestrator | 11:39:30.011 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.013874 | orchestrator | 11:39:30.011 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.013880 | orchestrator | 11:39:30.011 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.013885 | orchestrator | 11:39:30.011 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.013896 | orchestrator | 11:39:30.011 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.013902 | orchestrator | 11:39:30.011 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.013906 | orchestrator | 11:39:30.011 STDOUT terraform:  } 2025-04-05 11:39:30.013911 | orchestrator | 11:39:30.011 STDOUT terraform:  } 2025-04-05 11:39:30.013916 | orchestrator | 11:39:30.011 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-04-05 11:39:30.013921 | orchestrator | 11:39:30.011 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.013926 | orchestrator | 11:39:30.011 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.013931 | orchestrator | 11:39:30.011 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.013938 | orchestrator | 11:39:30.011 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.013943 | orchestrator | 11:39:30.011 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.013947 | orchestrator | 11:39:30.011 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.013952 | orchestrator | 11:39:30.011 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.013957 | orchestrator | 11:39:30.011 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.013962 | orchestrator | 11:39:30.011 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.013967 | orchestrator | 11:39:30.011 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.013972 | orchestrator | 11:39:30.011 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.013977 | orchestrator | 11:39:30.011 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.013981 | orchestrator | 11:39:30.011 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.013986 | orchestrator | 11:39:30.011 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.013991 | orchestrator | 11:39:30.011 
STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.013996 | orchestrator | 11:39:30.011 STDOUT terraform:  + name = "testbed-node-2" 2025-04-05 11:39:30.014003 | orchestrator | 11:39:30.011 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.014008 | orchestrator | 11:39:30.011 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.014030 | orchestrator | 11:39:30.011 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.014036 | orchestrator | 11:39:30.012 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.014041 | orchestrator | 11:39:30.012 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.014046 | orchestrator | 11:39:30.012 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.014051 | orchestrator | 11:39:30.012 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.014055 | orchestrator | 11:39:30.012 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.014060 | orchestrator | 11:39:30.012 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.014065 | orchestrator | 11:39:30.012 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.014070 | orchestrator | 11:39:30.012 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.014075 | orchestrator | 11:39:30.012 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.014080 | orchestrator | 11:39:30.012 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014084 | orchestrator | 11:39:30.012 STDOUT terraform:  } 2025-04-05 11:39:30.014090 | orchestrator | 11:39:30.012 STDOUT terraform:  + network { 2025-04-05 11:39:30.014094 | orchestrator | 11:39:30.012 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.014099 | orchestrator | 11:39:30.012 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.014108 | orchestrator | 11:39:30.012 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.014113 | orchestrator | 11:39:30.012 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.014118 | orchestrator | 11:39:30.012 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.014123 | orchestrator | 11:39:30.012 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.014128 | orchestrator | 11:39:30.012 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014133 | orchestrator | 11:39:30.012 STDOUT terraform:  } 2025-04-05 11:39:30.014138 | orchestrator | 11:39:30.012 STDOUT terraform:  } 2025-04-05 11:39:30.014143 | orchestrator | 11:39:30.012 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-04-05 11:39:30.014148 | orchestrator | 11:39:30.012 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.014152 | orchestrator | 11:39:30.012 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.014157 | orchestrator | 11:39:30.012 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.014162 | orchestrator | 11:39:30.012 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.014167 | orchestrator | 11:39:30.012 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.014178 | orchestrator | 11:39:30.012 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.014184 | orchestrator | 11:39:30.012 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.014189 | orchestrator | 
11:39:30.012 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.014194 | orchestrator | 11:39:30.012 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.014201 | orchestrator | 11:39:30.012 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.014206 | orchestrator | 11:39:30.012 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.014211 | orchestrator | 11:39:30.012 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.014236 | orchestrator | 11:39:30.012 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.014242 | orchestrator | 11:39:30.013 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.014247 | orchestrator | 11:39:30.013 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.014251 | orchestrator | 11:39:30.013 STDOUT terraform:  + name = "testbed-node-3" 2025-04-05 11:39:30.014256 | orchestrator | 11:39:30.013 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.014261 | orchestrator | 11:39:30.013 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.014266 | orchestrator | 11:39:30.013 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.014271 | orchestrator | 11:39:30.013 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.014275 | orchestrator | 11:39:30.013 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.014280 | orchestrator | 11:39:30.013 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.014285 | orchestrator | 11:39:30.013 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.014290 | orchestrator | 11:39:30.013 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.014295 | orchestrator | 11:39:30.013 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.014300 | orchestrator | 11:39:30.013 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.014305 | orchestrator | 11:39:30.013 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.014309 | orchestrator | 11:39:30.013 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.014314 | orchestrator | 11:39:30.013 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014319 | orchestrator | 11:39:30.013 STDOUT terraform:  } 2025-04-05 11:39:30.014324 | orchestrator | 11:39:30.013 STDOUT terraform:  + network { 2025-04-05 11:39:30.014332 | orchestrator | 11:39:30.013 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.014337 | orchestrator | 11:39:30.013 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.014342 | orchestrator | 11:39:30.013 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.014347 | orchestrator | 11:39:30.013 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.014355 | orchestrator | 11:39:30.013 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.014360 | orchestrator | 11:39:30.013 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.014365 | orchestrator | 11:39:30.013 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014369 | orchestrator | 11:39:30.013 STDOUT terraform:  } 2025-04-05 11:39:30.014375 | orchestrator | 11:39:30.013 STDOUT terraform:  } 2025-04-05 11:39:30.014379 | orchestrator | 11:39:30.013 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-04-05 11:39:30.014384 | orchestrator | 11:39:30.013 STDOUT 
terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.014389 | orchestrator | 11:39:30.013 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.014394 | orchestrator | 11:39:30.013 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.014399 | orchestrator | 11:39:30.013 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.014404 | orchestrator | 11:39:30.013 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.014408 | orchestrator | 11:39:30.013 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.014413 | orchestrator | 11:39:30.013 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.014439 | orchestrator | 11:39:30.013 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.014444 | orchestrator | 11:39:30.014 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.014449 | orchestrator | 11:39:30.014 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.014454 | orchestrator | 11:39:30.014 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.014459 | orchestrator | 11:39:30.014 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.014463 | orchestrator | 11:39:30.014 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.014468 | orchestrator | 11:39:30.014 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.014473 | orchestrator | 11:39:30.014 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.014478 | orchestrator | 11:39:30.014 STDOUT terraform:  + name = "testbed-node-4" 2025-04-05 11:39:30.014483 | orchestrator | 11:39:30.014 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.014489 | orchestrator | 11:39:30.014 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.014520 | orchestrator | 11:39:30.014 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.014526 | orchestrator | 11:39:30.014 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.014531 | orchestrator | 11:39:30.014 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.014536 | orchestrator | 11:39:30.014 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.014541 | orchestrator | 11:39:30.014 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.014548 | orchestrator | 11:39:30.014 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.014578 | orchestrator | 11:39:30.014 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.014585 | orchestrator | 11:39:30.014 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.014613 | orchestrator | 11:39:30.014 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.014643 | orchestrator | 11:39:30.014 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.014682 | orchestrator | 11:39:30.014 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014690 | orchestrator | 11:39:30.014 STDOUT terraform:  } 2025-04-05 11:39:30.014707 | orchestrator | 11:39:30.014 STDOUT terraform:  + network { 2025-04-05 11:39:30.014729 | orchestrator | 11:39:30.014 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.014767 | orchestrator | 11:39:30.014 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.014797 | orchestrator | 11:39:30.014 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 
11:39:30.014842 | orchestrator | 11:39:30.014 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.014877 | orchestrator | 11:39:30.014 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.014918 | orchestrator | 11:39:30.014 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.014945 | orchestrator | 11:39:30.014 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.014960 | orchestrator | 11:39:30.014 STDOUT terraform:  } 2025-04-05 11:39:30.014967 | orchestrator | 11:39:30.014 STDOUT terraform:  } 2025-04-05 11:39:30.015069 | orchestrator | 11:39:30.015 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-04-05 11:39:30.015116 | orchestrator | 11:39:30.015 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-05 11:39:30.015156 | orchestrator | 11:39:30.015 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-05 11:39:30.015193 | orchestrator | 11:39:30.015 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-05 11:39:30.015259 | orchestrator | 11:39:30.015 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-05 11:39:30.015297 | orchestrator | 11:39:30.015 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.015323 | orchestrator | 11:39:30.015 STDOUT terraform:  + availability_zone = "nova" 2025-04-05 11:39:30.015345 | orchestrator | 11:39:30.015 STDOUT terraform:  + config_drive = true 2025-04-05 11:39:30.015384 | orchestrator | 11:39:30.015 STDOUT terraform:  + created = (known after apply) 2025-04-05 11:39:30.015422 | orchestrator | 11:39:30.015 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-05 11:39:30.015457 | orchestrator | 11:39:30.015 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-05 11:39:30.015479 | orchestrator | 11:39:30.015 STDOUT terraform:  + force_delete = false 2025-04-05 11:39:30.015518 | orchestrator | 11:39:30.015 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.015558 | orchestrator | 11:39:30.015 STDOUT terraform:  + image_id = (known after apply) 2025-04-05 11:39:30.015596 | orchestrator | 11:39:30.015 STDOUT terraform:  + image_name = (known after apply) 2025-04-05 11:39:30.015626 | orchestrator | 11:39:30.015 STDOUT terraform:  + key_pair = "testbed" 2025-04-05 11:39:30.015658 | orchestrator | 11:39:30.015 STDOUT terraform:  + name = "testbed-node-5" 2025-04-05 11:39:30.015685 | orchestrator | 11:39:30.015 STDOUT terraform:  + power_state = "active" 2025-04-05 11:39:30.015725 | orchestrator | 11:39:30.015 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.015763 | orchestrator | 11:39:30.015 STDOUT terraform:  + security_groups = (known after apply) 2025-04-05 11:39:30.015788 | orchestrator | 11:39:30.015 STDOUT terraform:  + stop_before_destroy = false 2025-04-05 11:39:30.015827 | orchestrator | 11:39:30.015 STDOUT terraform:  + updated = (known after apply) 2025-04-05 11:39:30.015881 | orchestrator | 11:39:30.015 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-05 11:39:30.015911 | orchestrator | 11:39:30.015 STDOUT terraform:  + block_device { 2025-04-05 11:39:30.015918 | orchestrator | 11:39:30.015 STDOUT terraform:  + boot_index = 0 2025-04-05 11:39:30.015954 | orchestrator | 11:39:30.015 STDOUT terraform:  + delete_on_termination = false 2025-04-05 11:39:30.015985 | orchestrator | 11:39:30.015 STDOUT terraform:  + destination_type = "volume" 2025-04-05 11:39:30.016014 | orchestrator | 
11:39:30.015 STDOUT terraform:  + multiattach = false 2025-04-05 11:39:30.016046 | orchestrator | 11:39:30.016 STDOUT terraform:  + source_type = "volume" 2025-04-05 11:39:30.016086 | orchestrator | 11:39:30.016 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.016094 | orchestrator | 11:39:30.016 STDOUT terraform:  } 2025-04-05 11:39:30.016100 | orchestrator | 11:39:30.016 STDOUT terraform:  + network { 2025-04-05 11:39:30.016129 | orchestrator | 11:39:30.016 STDOUT terraform:  + access_network = false 2025-04-05 11:39:30.016161 | orchestrator | 11:39:30.016 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-05 11:39:30.016194 | orchestrator | 11:39:30.016 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-05 11:39:30.016232 | orchestrator | 11:39:30.016 STDOUT terraform:  + mac = (known after apply) 2025-04-05 11:39:30.016265 | orchestrator | 11:39:30.016 STDOUT terraform:  + name = (known after apply) 2025-04-05 11:39:30.016298 | orchestrator | 11:39:30.016 STDOUT terraform:  + port = (known after apply) 2025-04-05 11:39:30.016331 | orchestrator | 11:39:30.016 STDOUT terraform:  + uuid = (known after apply) 2025-04-05 11:39:30.016338 | orchestrator | 11:39:30.016 STDOUT terraform:  } 2025-04-05 11:39:30.016345 | orchestrator | 11:39:30.016 STDOUT terraform:  } 2025-04-05 11:39:30.016390 | orchestrator | 11:39:30.016 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-04-05 11:39:30.016424 | orchestrator | 11:39:30.016 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-04-05 11:39:30.016453 | orchestrator | 11:39:30.016 STDOUT terraform:  + fingerprint = (known after apply) 2025-04-05 11:39:30.016485 | orchestrator | 11:39:30.016 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.016496 | orchestrator | 11:39:30.016 STDOUT terraform:  + name = "testbed" 2025-04-05 11:39:30.016528 | orchestrator | 11:39:30.016 STDOUT terraform:  + private_key = (sensitive value) 2025-04-05 11:39:30.016557 | orchestrator | 11:39:30.016 STDOUT terraform:  + public_key = (known after apply) 2025-04-05 11:39:30.016586 | orchestrator | 11:39:30.016 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.016616 | orchestrator | 11:39:30.016 STDOUT terraform:  + user_id = (known after apply) 2025-04-05 11:39:30.016623 | orchestrator | 11:39:30.016 STDOUT terraform:  } 2025-04-05 11:39:30.016680 | orchestrator | 11:39:30.016 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-04-05 11:39:30.016733 | orchestrator | 11:39:30.016 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.016763 | orchestrator | 11:39:30.016 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.016793 | orchestrator | 11:39:30.016 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.016822 | orchestrator | 11:39:30.016 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.016853 | orchestrator | 11:39:30.016 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.016881 | orchestrator | 11:39:30.016 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.016888 | orchestrator | 11:39:30.016 STDOUT terraform:  } 2025-04-05 11:39:30.016945 | orchestrator | 11:39:30.016 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-04-05 11:39:30.016995 | orchestrator | 11:39:30.016 STDOUT 
terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.017025 | orchestrator | 11:39:30.016 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.017055 | orchestrator | 11:39:30.017 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.017083 | orchestrator | 11:39:30.017 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.017113 | orchestrator | 11:39:30.017 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.017144 | orchestrator | 11:39:30.017 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.017152 | orchestrator | 11:39:30.017 STDOUT terraform:  } 2025-04-05 11:39:30.017203 | orchestrator | 11:39:30.017 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-04-05 11:39:30.017262 | orchestrator | 11:39:30.017 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.017292 | orchestrator | 11:39:30.017 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.017321 | orchestrator | 11:39:30.017 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.017352 | orchestrator | 11:39:30.017 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.017381 | orchestrator | 11:39:30.017 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.017410 | orchestrator | 11:39:30.017 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.017420 | orchestrator | 11:39:30.017 STDOUT terraform:  } 2025-04-05 11:39:30.017472 | orchestrator | 11:39:30.017 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-04-05 11:39:30.017522 | orchestrator | 11:39:30.017 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.017551 | orchestrator | 11:39:30.017 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.017582 | orchestrator | 11:39:30.017 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.017611 | orchestrator | 11:39:30.017 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.017641 | orchestrator | 11:39:30.017 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.017671 | orchestrator | 11:39:30.017 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.017678 | orchestrator | 11:39:30.017 STDOUT terraform:  } 2025-04-05 11:39:30.017735 | orchestrator | 11:39:30.017 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-04-05 11:39:30.017786 | orchestrator | 11:39:30.017 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.017816 | orchestrator | 11:39:30.017 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.017846 | orchestrator | 11:39:30.017 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.017876 | orchestrator | 11:39:30.017 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.017906 | orchestrator | 11:39:30.017 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.017935 | orchestrator | 11:39:30.017 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.017942 | orchestrator | 11:39:30.017 STDOUT terraform:  } 2025-04-05 11:39:30.017996 | orchestrator | 11:39:30.017 STDOUT terraform:  # 
openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-04-05 11:39:30.018062 | orchestrator | 11:39:30.017 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.018092 | orchestrator | 11:39:30.018 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.018121 | orchestrator | 11:39:30.018 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.018150 | orchestrator | 11:39:30.018 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.018179 | orchestrator | 11:39:30.018 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.018210 | orchestrator | 11:39:30.018 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.018240 | orchestrator | 11:39:30.018 STDOUT terraform:  } 2025-04-05 11:39:30.018293 | orchestrator | 11:39:30.018 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-04-05 11:39:30.018343 | orchestrator | 11:39:30.018 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.018373 | orchestrator | 11:39:30.018 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.018403 | orchestrator | 11:39:30.018 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.018433 | orchestrator | 11:39:30.018 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.018464 | orchestrator | 11:39:30.018 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.018496 | orchestrator | 11:39:30.018 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.018503 | orchestrator | 11:39:30.018 STDOUT terraform:  } 2025-04-05 11:39:30.018559 | orchestrator | 11:39:30.018 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-04-05 11:39:30.018610 | orchestrator | 11:39:30.018 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.018639 | orchestrator | 11:39:30.018 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.018669 | orchestrator | 11:39:30.018 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.018699 | orchestrator | 11:39:30.018 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.018731 | orchestrator | 11:39:30.018 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.018759 | orchestrator | 11:39:30.018 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.018766 | orchestrator | 11:39:30.018 STDOUT terraform:  } 2025-04-05 11:39:30.018821 | orchestrator | 11:39:30.018 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-04-05 11:39:30.018871 | orchestrator | 11:39:30.018 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.018900 | orchestrator | 11:39:30.018 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.018930 | orchestrator | 11:39:30.018 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.018959 | orchestrator | 11:39:30.018 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.018989 | orchestrator | 11:39:30.018 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.019017 | orchestrator | 11:39:30.018 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 
11:39:30.019024 | orchestrator | 11:39:30.019 STDOUT terraform:  } 2025-04-05 11:39:30.019079 | orchestrator | 11:39:30.019 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-04-05 11:39:30.019130 | orchestrator | 11:39:30.019 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.019160 | orchestrator | 11:39:30.019 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.019191 | orchestrator | 11:39:30.019 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.019234 | orchestrator | 11:39:30.019 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.019258 | orchestrator | 11:39:30.019 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.019287 | orchestrator | 11:39:30.019 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.019294 | orchestrator | 11:39:30.019 STDOUT terraform:  } 2025-04-05 11:39:30.019352 | orchestrator | 11:39:30.019 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-04-05 11:39:30.019402 | orchestrator | 11:39:30.019 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.019432 | orchestrator | 11:39:30.019 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.019462 | orchestrator | 11:39:30.019 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.019493 | orchestrator | 11:39:30.019 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.019521 | orchestrator | 11:39:30.019 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.019551 | orchestrator | 11:39:30.019 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.019558 | orchestrator | 11:39:30.019 STDOUT terraform:  } 2025-04-05 11:39:30.019613 | orchestrator | 11:39:30.019 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-04-05 11:39:30.019665 | orchestrator | 11:39:30.019 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.019695 | orchestrator | 11:39:30.019 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.019724 | orchestrator | 11:39:30.019 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.019753 | orchestrator | 11:39:30.019 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.019785 | orchestrator | 11:39:30.019 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.019814 | orchestrator | 11:39:30.019 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.019821 | orchestrator | 11:39:30.019 STDOUT terraform:  } 2025-04-05 11:39:30.019877 | orchestrator | 11:39:30.019 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-04-05 11:39:30.019928 | orchestrator | 11:39:30.019 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.019957 | orchestrator | 11:39:30.019 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.019987 | orchestrator | 11:39:30.019 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.020016 | orchestrator | 11:39:30.019 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.020047 | orchestrator | 11:39:30.020 STDOUT terraform:  + region = 
(known after apply) 2025-04-05 11:39:30.020077 | orchestrator | 11:39:30.020 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.020084 | orchestrator | 11:39:30.020 STDOUT terraform:  } 2025-04-05 11:39:30.020138 | orchestrator | 11:39:30.020 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-04-05 11:39:30.020189 | orchestrator | 11:39:30.020 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.020227 | orchestrator | 11:39:30.020 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.020259 | orchestrator | 11:39:30.020 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.020287 | orchestrator | 11:39:30.020 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.020315 | orchestrator | 11:39:30.020 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.020345 | orchestrator | 11:39:30.020 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.020356 | orchestrator | 11:39:30.020 STDOUT terraform:  } 2025-04-05 11:39:30.020409 | orchestrator | 11:39:30.020 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-04-05 11:39:30.020459 | orchestrator | 11:39:30.020 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.020488 | orchestrator | 11:39:30.020 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.020518 | orchestrator | 11:39:30.020 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.020558 | orchestrator | 11:39:30.020 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.020588 | orchestrator | 11:39:30.020 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.020616 | orchestrator | 11:39:30.020 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.020625 | orchestrator | 11:39:30.020 STDOUT terraform:  } 2025-04-05 11:39:30.020678 | orchestrator | 11:39:30.020 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-04-05 11:39:30.020728 | orchestrator | 11:39:30.020 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.020758 | orchestrator | 11:39:30.020 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.020789 | orchestrator | 11:39:30.020 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.020819 | orchestrator | 11:39:30.020 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.020848 | orchestrator | 11:39:30.020 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.020877 | orchestrator | 11:39:30.020 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.020883 | orchestrator | 11:39:30.020 STDOUT terraform:  } 2025-04-05 11:39:30.020939 | orchestrator | 11:39:30.020 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-04-05 11:39:30.020989 | orchestrator | 11:39:30.020 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.021020 | orchestrator | 11:39:30.020 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.021051 | orchestrator | 11:39:30.021 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.021079 | orchestrator | 11:39:30.021 
STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.021108 | orchestrator | 11:39:30.021 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.021138 | orchestrator | 11:39:30.021 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.021145 | orchestrator | 11:39:30.021 STDOUT terraform:  } 2025-04-05 11:39:30.021199 | orchestrator | 11:39:30.021 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-04-05 11:39:30.021272 | orchestrator | 11:39:30.021 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-05 11:39:30.021293 | orchestrator | 11:39:30.021 STDOUT terraform:  + device = (known after apply) 2025-04-05 11:39:30.021324 | orchestrator | 11:39:30.021 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.021357 | orchestrator | 11:39:30.021 STDOUT terraform:  + instance_id = (known after apply) 2025-04-05 11:39:30.021387 | orchestrator | 11:39:30.021 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.021416 | orchestrator | 11:39:30.021 STDOUT terraform:  + volume_id = (known after apply) 2025-04-05 11:39:30.021423 | orchestrator | 11:39:30.021 STDOUT terraform:  } 2025-04-05 11:39:30.021483 | orchestrator | 11:39:30.021 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-04-05 11:39:30.021542 | orchestrator | 11:39:30.021 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-04-05 11:39:30.021579 | orchestrator | 11:39:30.021 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-05 11:39:30.022625 | orchestrator | 11:39:30.021 STDOUT terraform:  + floating_ip = (known after apply) 2025-04-05 11:39:30.022655 | orchestrator | 11:39:30.021 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.022661 | orchestrator | 11:39:30.021 STDOUT terraform:  + port_id = (known after apply) 2025-04-05 11:39:30.022668 | orchestrator | 11:39:30.021 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.022674 | orchestrator | 11:39:30.021 STDOUT terraform:  } 2025-04-05 11:39:30.022679 | orchestrator | 11:39:30.021 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-04-05 11:39:30.022684 | orchestrator | 11:39:30.021 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-04-05 11:39:30.022689 | orchestrator | 11:39:30.021 STDOUT terraform:  + address = (known after apply) 2025-04-05 11:39:30.022694 | orchestrator | 11:39:30.021 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.022698 | orchestrator | 11:39:30.021 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-05 11:39:30.022703 | orchestrator | 11:39:30.021 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.022708 | orchestrator | 11:39:30.021 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-05 11:39:30.022712 | orchestrator | 11:39:30.021 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.022717 | orchestrator | 11:39:30.021 STDOUT terraform:  + pool = "public" 2025-04-05 11:39:30.022722 | orchestrator | 11:39:30.021 STDOUT terraform:  + port_id = (known after apply) 2025-04-05 11:39:30.022727 | orchestrator | 11:39:30.021 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.022731 | orchestrator | 11:39:30.021 STDOUT 
terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.022736 | orchestrator | 11:39:30.021 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.022741 | orchestrator | 11:39:30.022 STDOUT terraform:  } 2025-04-05 11:39:30.022746 | orchestrator | 11:39:30.022 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-04-05 11:39:30.022750 | orchestrator | 11:39:30.022 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-04-05 11:39:30.022755 | orchestrator | 11:39:30.022 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.022760 | orchestrator | 11:39:30.022 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.022769 | orchestrator | 11:39:30.022 STDOUT terraform:  + availability_zone_hints = [ 2025-04-05 11:39:30.022774 | orchestrator | 11:39:30.022 STDOUT terraform:  + "nova", 2025-04-05 11:39:30.022779 | orchestrator | 11:39:30.022 STDOUT terraform:  ] 2025-04-05 11:39:30.022784 | orchestrator | 11:39:30.022 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-05 11:39:30.022788 | orchestrator | 11:39:30.022 STDOUT terraform:  + external = (known after apply) 2025-04-05 11:39:30.022793 | orchestrator | 11:39:30.022 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.022798 | orchestrator | 11:39:30.022 STDOUT terraform:  + mtu = (known after apply) 2025-04-05 11:39:30.022803 | orchestrator | 11:39:30.022 STDOUT terraform:  + name = "net-testbed-management" 2025-04-05 11:39:30.022808 | orchestrator | 11:39:30.022 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.022813 | orchestrator | 11:39:30.022 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.022817 | orchestrator | 11:39:30.022 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.022822 | orchestrator | 11:39:30.022 STDOUT terraform:  + shared = (known after apply) 2025-04-05 11:39:30.022827 | orchestrator | 11:39:30.022 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.022834 | orchestrator | 11:39:30.022 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-04-05 11:39:30.022851 | orchestrator | 11:39:30.022 STDOUT terraform:  + segments (known after apply) 2025-04-05 11:39:30.022856 | orchestrator | 11:39:30.022 STDOUT terraform:  } 2025-04-05 11:39:30.022861 | orchestrator | 11:39:30.022 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-04-05 11:39:30.022866 | orchestrator | 11:39:30.022 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-04-05 11:39:30.022871 | orchestrator | 11:39:30.022 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.022877 | orchestrator | 11:39:30.022 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.022883 | orchestrator | 11:39:30.022 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.022921 | orchestrator | 11:39:30.022 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.022960 | orchestrator | 11:39:30.022 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.022998 | orchestrator | 11:39:30.022 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.023037 | orchestrator | 11:39:30.022 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.023075 | orchestrator | 
11:39:30.023 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.023115 | orchestrator | 11:39:30.023 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.023153 | orchestrator | 11:39:30.023 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.023193 | orchestrator | 11:39:30.023 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.023237 | orchestrator | 11:39:30.023 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.023274 | orchestrator | 11:39:30.023 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.023313 | orchestrator | 11:39:30.023 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.023350 | orchestrator | 11:39:30.023 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.023388 | orchestrator | 11:39:30.023 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.023409 | orchestrator | 11:39:30.023 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.023440 | orchestrator | 11:39:30.023 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.023447 | orchestrator | 11:39:30.023 STDOUT terraform:  } 2025-04-05 11:39:30.023470 | orchestrator | 11:39:30.023 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.023500 | orchestrator | 11:39:30.023 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.023507 | orchestrator | 11:39:30.023 STDOUT terraform:  } 2025-04-05 11:39:30.023535 | orchestrator | 11:39:30.023 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.023542 | orchestrator | 11:39:30.023 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.023572 | orchestrator | 11:39:30.023 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-04-05 11:39:30.023603 | orchestrator | 11:39:30.023 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.023610 | orchestrator | 11:39:30.023 STDOUT terraform:  } 2025-04-05 11:39:30.023627 | orchestrator | 11:39:30.023 STDOUT terraform:  } 2025-04-05 11:39:30.023676 | orchestrator | 11:39:30.023 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-04-05 11:39:30.023724 | orchestrator | 11:39:30.023 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.023761 | orchestrator | 11:39:30.023 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.023799 | orchestrator | 11:39:30.023 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.023836 | orchestrator | 11:39:30.023 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.023874 | orchestrator | 11:39:30.023 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.023912 | orchestrator | 11:39:30.023 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.023950 | orchestrator | 11:39:30.023 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.023989 | orchestrator | 11:39:30.023 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.024028 | orchestrator | 11:39:30.023 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.024066 | orchestrator | 11:39:30.024 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.024103 | orchestrator | 11:39:30.024 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.024140 | 
orchestrator | 11:39:30.024 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.024177 | orchestrator | 11:39:30.024 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.024231 | orchestrator | 11:39:30.024 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.024288 | orchestrator | 11:39:30.024 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.024318 | orchestrator | 11:39:30.024 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.024356 | orchestrator | 11:39:30.024 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.024377 | orchestrator | 11:39:30.024 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.024408 | orchestrator | 11:39:30.024 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.024415 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024439 | orchestrator | 11:39:30.024 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.024471 | orchestrator | 11:39:30.024 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.024478 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024515 | orchestrator | 11:39:30.024 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.024556 | orchestrator | 11:39:30.024 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.024564 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024588 | orchestrator | 11:39:30.024 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.024619 | orchestrator | 11:39:30.024 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.024626 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024653 | orchestrator | 11:39:30.024 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.024661 | orchestrator | 11:39:30.024 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.024690 | orchestrator | 11:39:30.024 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-04-05 11:39:30.024722 | orchestrator | 11:39:30.024 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.024728 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024735 | orchestrator | 11:39:30.024 STDOUT terraform:  } 2025-04-05 11:39:30.024792 | orchestrator | 11:39:30.024 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-04-05 11:39:30.024841 | orchestrator | 11:39:30.024 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.024879 | orchestrator | 11:39:30.024 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.024917 | orchestrator | 11:39:30.024 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.024955 | orchestrator | 11:39:30.024 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.024995 | orchestrator | 11:39:30.024 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.025032 | orchestrator | 11:39:30.024 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.025071 | orchestrator | 11:39:30.025 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.025108 | orchestrator | 11:39:30.025 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.025146 | orchestrator | 11:39:30.025 STDOUT terraform:  + 
dns_name = (known after apply) 2025-04-05 11:39:30.025184 | orchestrator | 11:39:30.025 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.025236 | orchestrator | 11:39:30.025 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.025272 | orchestrator | 11:39:30.025 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.025309 | orchestrator | 11:39:30.025 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.025347 | orchestrator | 11:39:30.025 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.025386 | orchestrator | 11:39:30.025 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.025423 | orchestrator | 11:39:30.025 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.025461 | orchestrator | 11:39:30.025 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.025479 | orchestrator | 11:39:30.025 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.025509 | orchestrator | 11:39:30.025 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.025516 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025540 | orchestrator | 11:39:30.025 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.025571 | orchestrator | 11:39:30.025 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.025578 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025602 | orchestrator | 11:39:30.025 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.025632 | orchestrator | 11:39:30.025 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.025639 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025663 | orchestrator | 11:39:30.025 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.025693 | orchestrator | 11:39:30.025 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.025700 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025728 | orchestrator | 11:39:30.025 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.025734 | orchestrator | 11:39:30.025 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.025764 | orchestrator | 11:39:30.025 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-04-05 11:39:30.025794 | orchestrator | 11:39:30.025 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.025801 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025808 | orchestrator | 11:39:30.025 STDOUT terraform:  } 2025-04-05 11:39:30.025861 | orchestrator | 11:39:30.025 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-04-05 11:39:30.025907 | orchestrator | 11:39:30.025 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.025947 | orchestrator | 11:39:30.025 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.025985 | orchestrator | 11:39:30.025 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.026025 | orchestrator | 11:39:30.025 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.026080 | orchestrator | 11:39:30.026 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.026117 | orchestrator | 11:39:30.026 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.026154 | 
orchestrator | 11:39:30.026 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.026193 | orchestrator | 11:39:30.026 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.026242 | orchestrator | 11:39:30.026 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.026281 | orchestrator | 11:39:30.026 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.026318 | orchestrator | 11:39:30.026 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.026357 | orchestrator | 11:39:30.026 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.026397 | orchestrator | 11:39:30.026 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.026435 | orchestrator | 11:39:30.026 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.026477 | orchestrator | 11:39:30.026 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.026512 | orchestrator | 11:39:30.026 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.026550 | orchestrator | 11:39:30.026 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.026568 | orchestrator | 11:39:30.026 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.026598 | orchestrator | 11:39:30.026 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.026605 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026630 | orchestrator | 11:39:30.026 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.026661 | orchestrator | 11:39:30.026 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.026668 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026693 | orchestrator | 11:39:30.026 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.026722 | orchestrator | 11:39:30.026 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.026729 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026754 | orchestrator | 11:39:30.026 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.026784 | orchestrator | 11:39:30.026 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.026791 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026820 | orchestrator | 11:39:30.026 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.026827 | orchestrator | 11:39:30.026 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.026858 | orchestrator | 11:39:30.026 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-04-05 11:39:30.026882 | orchestrator | 11:39:30.026 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.026889 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026907 | orchestrator | 11:39:30.026 STDOUT terraform:  } 2025-04-05 11:39:30.026953 | orchestrator | 11:39:30.026 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-04-05 11:39:30.027000 | orchestrator | 11:39:30.026 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.027038 | orchestrator | 11:39:30.026 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.027076 | orchestrator | 11:39:30.027 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.027114 | orchestrator | 11:39:30.027 STDOUT terraform:  + all_security_group_ids = 
(known after apply) 2025-04-05 11:39:30.027153 | orchestrator | 11:39:30.027 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.027191 | orchestrator | 11:39:30.027 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.027422 | orchestrator | 11:39:30.027 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.027503 | orchestrator | 11:39:30.027 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.027522 | orchestrator | 11:39:30.027 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.027536 | orchestrator | 11:39:30.027 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.027550 | orchestrator | 11:39:30.027 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.027570 | orchestrator | 11:39:30.027 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.027585 | orchestrator | 11:39:30.027 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.027599 | orchestrator | 11:39:30.027 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.027613 | orchestrator | 11:39:30.027 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.027627 | orchestrator | 11:39:30.027 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.027645 | orchestrator | 11:39:30.027 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.027659 | orchestrator | 11:39:30.027 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.027674 | orchestrator | 11:39:30.027 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.027689 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027707 | orchestrator | 11:39:30.027 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.027739 | orchestrator | 11:39:30.027 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.027754 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027783 | orchestrator | 11:39:30.027 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.027802 | orchestrator | 11:39:30.027 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.027816 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027846 | orchestrator | 11:39:30.027 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.027861 | orchestrator | 11:39:30.027 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.027876 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027894 | orchestrator | 11:39:30.027 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.027909 | orchestrator | 11:39:30.027 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.027922 | orchestrator | 11:39:30.027 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-05 11:39:30.027936 | orchestrator | 11:39:30.027 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.027950 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027968 | orchestrator | 11:39:30.027 STDOUT terraform:  } 2025-04-05 11:39:30.027998 | orchestrator | 11:39:30.027 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-05 11:39:30.028017 | orchestrator | 11:39:30.027 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.028034 | orchestrator | 11:39:30.027 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.028086 | orchestrator | 11:39:30.028 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.028105 | orchestrator | 11:39:30.028 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.028147 | orchestrator | 11:39:30.028 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.028165 | orchestrator | 11:39:30.028 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.028251 | orchestrator | 11:39:30.028 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.028294 | orchestrator | 11:39:30.028 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.028318 | orchestrator | 11:39:30.028 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.028359 | orchestrator | 11:39:30.028 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.028377 | orchestrator | 11:39:30.028 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.028394 | orchestrator | 11:39:30.028 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.028435 | orchestrator | 11:39:30.028 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.028453 | orchestrator | 11:39:30.028 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.028508 | orchestrator | 11:39:30.028 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.028527 | orchestrator | 11:39:30.028 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.028581 | orchestrator | 11:39:30.028 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.028621 | orchestrator | 11:39:30.028 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.028640 | orchestrator | 11:39:30.028 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.028700 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.028732 | orchestrator | 11:39:30.028 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.028751 | orchestrator | 11:39:30.028 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.028765 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.028779 | orchestrator | 11:39:30.028 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.028793 | orchestrator | 11:39:30.028 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.028807 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.028821 | orchestrator | 11:39:30.028 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.028839 | orchestrator | 11:39:30.028 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.028881 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.028896 | orchestrator | 11:39:30.028 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.028911 | orchestrator | 11:39:30.028 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.028925 | orchestrator | 11:39:30.028 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-05 11:39:30.028943 | orchestrator | 11:39:30.028 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.028981 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.028996 | orchestrator | 11:39:30.028 STDOUT terraform:  } 2025-04-05 11:39:30.029010 | orchestrator | 11:39:30.028 STDOUT terraform:  # 
openstack_networking_port_v2.node_port_management[5] will be created 2025-04-05 11:39:30.029028 | orchestrator | 11:39:30.028 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-05 11:39:30.029042 | orchestrator | 11:39:30.028 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.029087 | orchestrator | 11:39:30.028 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-05 11:39:30.029106 | orchestrator | 11:39:30.029 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-05 11:39:30.029148 | orchestrator | 11:39:30.029 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.029166 | orchestrator | 11:39:30.029 STDOUT terraform:  + device_id = (known after apply) 2025-04-05 11:39:30.029183 | orchestrator | 11:39:30.029 STDOUT terraform:  + device_owner = (known after apply) 2025-04-05 11:39:30.029266 | orchestrator | 11:39:30.029 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-05 11:39:30.029318 | orchestrator | 11:39:30.029 STDOUT terraform:  + dns_name = (known after apply) 2025-04-05 11:39:30.029338 | orchestrator | 11:39:30.029 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.029403 | orchestrator | 11:39:30.029 STDOUT terraform:  + mac_address = (known after apply) 2025-04-05 11:39:30.029423 | orchestrator | 11:39:30.029 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.029438 | orchestrator | 11:39:30.029 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-05 11:39:30.029455 | orchestrator | 11:39:30.029 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-05 11:39:30.029486 | orchestrator | 11:39:30.029 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.029505 | orchestrator | 11:39:30.029 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-05 11:39:30.029552 | orchestrator | 11:39:30.029 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.029576 | orchestrator | 11:39:30.029 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.029600 | orchestrator | 11:39:30.029 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-05 11:39:30.033467 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033530 | orchestrator | 11:39:30.029 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.033558 | orchestrator | 11:39:30.029 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-05 11:39:30.033572 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033585 | orchestrator | 11:39:30.029 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.033597 | orchestrator | 11:39:30.029 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-05 11:39:30.033610 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033622 | orchestrator | 11:39:30.029 STDOUT terraform:  + allowed_address_pairs { 2025-04-05 11:39:30.033635 | orchestrator | 11:39:30.029 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-05 11:39:30.033647 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033660 | orchestrator | 11:39:30.029 STDOUT terraform:  + binding (known after apply) 2025-04-05 11:39:30.033672 | orchestrator | 11:39:30.029 STDOUT terraform:  + fixed_ip { 2025-04-05 11:39:30.033685 | orchestrator | 11:39:30.029 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-05 11:39:30.033706 | orchestrator | 11:39:30.029 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-04-05 11:39:30.033719 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033731 | orchestrator | 11:39:30.029 STDOUT terraform:  } 2025-04-05 11:39:30.033744 | orchestrator | 11:39:30.029 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-05 11:39:30.033762 | orchestrator | 11:39:30.029 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-05 11:39:30.033774 | orchestrator | 11:39:30.029 STDOUT terraform:  + force_destroy = false 2025-04-05 11:39:30.033787 | orchestrator | 11:39:30.029 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.033799 | orchestrator | 11:39:30.029 STDOUT terraform:  + port_id = (known after apply) 2025-04-05 11:39:30.033812 | orchestrator | 11:39:30.029 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.033824 | orchestrator | 11:39:30.030 STDOUT terraform:  + router_id = (known after apply) 2025-04-05 11:39:30.033837 | orchestrator | 11:39:30.030 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-05 11:39:30.033849 | orchestrator | 11:39:30.030 STDOUT terraform:  } 2025-04-05 11:39:30.033862 | orchestrator | 11:39:30.030 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-05 11:39:30.033875 | orchestrator | 11:39:30.030 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-05 11:39:30.033901 | orchestrator | 11:39:30.030 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-05 11:39:30.033914 | orchestrator | 11:39:30.030 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.033926 | orchestrator | 11:39:30.030 STDOUT terraform:  + availability_zone_hints = [ 2025-04-05 11:39:30.033939 | orchestrator | 11:39:30.030 STDOUT terraform:  + "nova", 2025-04-05 11:39:30.033952 | orchestrator | 11:39:30.030 STDOUT terraform:  ] 2025-04-05 11:39:30.033964 | orchestrator | 11:39:30.030 STDOUT terraform:  + distributed = (known after apply) 2025-04-05 11:39:30.033977 | orchestrator | 11:39:30.030 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-05 11:39:30.033989 | orchestrator | 11:39:30.030 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-05 11:39:30.034002 | orchestrator | 11:39:30.030 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034045 | orchestrator | 11:39:30.030 STDOUT terraform:  + name = "testbed" 2025-04-05 11:39:30.034060 | orchestrator | 11:39:30.030 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.034073 | orchestrator | 11:39:30.030 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034085 | orchestrator | 11:39:30.030 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-05 11:39:30.034098 | orchestrator | 11:39:30.030 STDOUT terraform:  } 2025-04-05 11:39:30.034118 | orchestrator | 11:39:30.030 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-05 11:39:30.034133 | orchestrator | 11:39:30.030 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-04-05 11:39:30.034145 | orchestrator | 11:39:30.030 STDOUT terraform:  + description = "ssh" 2025-04-05 11:39:30.034158 | orchestrator | 11:39:30.030 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.034171 | orchestrator | 11:39:30.030 STDOUT terraform:  + ethertype = 
"IPv4" 2025-04-05 11:39:30.034183 | orchestrator | 11:39:30.030 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034196 | orchestrator | 11:39:30.030 STDOUT terraform:  + port_range_max = 22 2025-04-05 11:39:30.034209 | orchestrator | 11:39:30.030 STDOUT terraform:  + port_range_min = 22 2025-04-05 11:39:30.034286 | orchestrator | 11:39:30.030 STDOUT terraform:  + protocol = "tcp" 2025-04-05 11:39:30.034300 | orchestrator | 11:39:30.030 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.034313 | orchestrator | 11:39:30.030 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.034326 | orchestrator | 11:39:30.030 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.034338 | orchestrator | 11:39:30.030 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.034351 | orchestrator | 11:39:30.030 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034362 | orchestrator | 11:39:30.030 STDOUT terraform:  } 2025-04-05 11:39:30.034373 | orchestrator | 11:39:30.030 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-05 11:39:30.034391 | orchestrator | 11:39:30.031 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-05 11:39:30.034401 | orchestrator | 11:39:30.031 STDOUT terraform:  + description = "wireguard" 2025-04-05 11:39:30.034411 | orchestrator | 11:39:30.031 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.034421 | orchestrator | 11:39:30.031 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.034432 | orchestrator | 11:39:30.031 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034442 | orchestrator | 11:39:30.031 STDOUT terraform:  + port_range_max = 51820 2025-04-05 11:39:30.034453 | orchestrator | 11:39:30.031 STDOUT terraform:  + port_range_min = 51820 2025-04-05 11:39:30.034463 | orchestrator | 11:39:30.031 STDOUT terraform:  + protocol = "udp" 2025-04-05 11:39:30.034474 | orchestrator | 11:39:30.031 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.034484 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.034494 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.034505 | orchestrator | 11:39:30.031 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.034515 | orchestrator | 11:39:30.031 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034525 | orchestrator | 11:39:30.031 STDOUT terraform:  } 2025-04-05 11:39:30.034536 | orchestrator | 11:39:30.031 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-05 11:39:30.034546 | orchestrator | 11:39:30.031 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-05 11:39:30.034557 | orchestrator | 11:39:30.031 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.034604 | orchestrator | 11:39:30.031 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.034614 | orchestrator | 11:39:30.031 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034625 | orchestrator | 11:39:30.031 STDOUT terraform:  + protocol = "tcp" 2025-04-05 11:39:30.034635 | orchestrator | 11:39:30.031 STDOUT terraform:  + region = (known after apply) 2025-04-05 
11:39:30.034650 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.034661 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-05 11:39:30.034671 | orchestrator | 11:39:30.031 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.034681 | orchestrator | 11:39:30.031 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034691 | orchestrator | 11:39:30.031 STDOUT terraform:  } 2025-04-05 11:39:30.034702 | orchestrator | 11:39:30.031 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-05 11:39:30.034712 | orchestrator | 11:39:30.031 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-05 11:39:30.034728 | orchestrator | 11:39:30.031 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.034738 | orchestrator | 11:39:30.031 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.034748 | orchestrator | 11:39:30.031 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034759 | orchestrator | 11:39:30.031 STDOUT terraform:  + protocol = "udp" 2025-04-05 11:39:30.034773 | orchestrator | 11:39:30.031 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.034784 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.034801 | orchestrator | 11:39:30.031 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-05 11:39:30.034812 | orchestrator | 11:39:30.031 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.034823 | orchestrator | 11:39:30.031 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034833 | orchestrator | 11:39:30.032 STDOUT terraform:  } 2025-04-05 11:39:30.034843 | orchestrator | 11:39:30.032 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-05 11:39:30.034854 | orchestrator | 11:39:30.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-05 11:39:30.034869 | orchestrator | 11:39:30.032 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.034879 | orchestrator | 11:39:30.032 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.034890 | orchestrator | 11:39:30.032 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.034900 | orchestrator | 11:39:30.032 STDOUT terraform:  + protocol = "icmp" 2025-04-05 11:39:30.034911 | orchestrator | 11:39:30.032 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.034921 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.034931 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.034941 | orchestrator | 11:39:30.032 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.034952 | orchestrator | 11:39:30.032 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.034962 | orchestrator | 11:39:30.032 STDOUT terraform:  } 2025-04-05 11:39:30.034972 | orchestrator | 11:39:30.032 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-05 11:39:30.034983 | orchestrator | 11:39:30.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule1" { 2025-04-05 11:39:30.034995 | orchestrator | 11:39:30.032 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.035005 | orchestrator | 11:39:30.032 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.035015 | orchestrator | 11:39:30.032 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035025 | orchestrator | 11:39:30.032 STDOUT terraform:  + protocol = "tcp" 2025-04-05 11:39:30.035036 | orchestrator | 11:39:30.032 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035046 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.035068 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.035079 | orchestrator | 11:39:30.032 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.035089 | orchestrator | 11:39:30.032 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035100 | orchestrator | 11:39:30.032 STDOUT terraform:  } 2025-04-05 11:39:30.035110 | orchestrator | 11:39:30.032 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-05 11:39:30.035120 | orchestrator | 11:39:30.032 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-05 11:39:30.035131 | orchestrator | 11:39:30.032 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.035141 | orchestrator | 11:39:30.032 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.035151 | orchestrator | 11:39:30.032 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035162 | orchestrator | 11:39:30.032 STDOUT terraform:  + protocol = "udp" 2025-04-05 11:39:30.035172 | orchestrator | 11:39:30.032 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035182 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.035192 | orchestrator | 11:39:30.032 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.035202 | orchestrator | 11:39:30.032 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.035212 | orchestrator | 11:39:30.032 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035237 | orchestrator | 11:39:30.032 STDOUT terraform:  } 2025-04-05 11:39:30.035247 | orchestrator | 11:39:30.033 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-05 11:39:30.035258 | orchestrator | 11:39:30.033 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-05 11:39:30.035268 | orchestrator | 11:39:30.033 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.035278 | orchestrator | 11:39:30.033 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.035288 | orchestrator | 11:39:30.033 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035298 | orchestrator | 11:39:30.033 STDOUT terraform:  + protocol = "icmp" 2025-04-05 11:39:30.035308 | orchestrator | 11:39:30.033 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035318 | orchestrator | 11:39:30.033 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.035328 | orchestrator | 11:39:30.033 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.035338 | orchestrator | 11:39:30.033 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-04-05 11:39:30.035348 | orchestrator | 11:39:30.033 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035357 | orchestrator | 11:39:30.033 STDOUT terraform:  } 2025-04-05 11:39:30.035367 | orchestrator | 11:39:30.033 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-05 11:39:30.035382 | orchestrator | 11:39:30.033 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-05 11:39:30.035393 | orchestrator | 11:39:30.033 STDOUT terraform:  + description = "vrrp" 2025-04-05 11:39:30.035403 | orchestrator | 11:39:30.033 STDOUT terraform:  + direction = "ingress" 2025-04-05 11:39:30.035413 | orchestrator | 11:39:30.033 STDOUT terraform:  + ethertype = "IPv4" 2025-04-05 11:39:30.035423 | orchestrator | 11:39:30.033 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035432 | orchestrator | 11:39:30.033 STDOUT terraform:  + protocol = "112" 2025-04-05 11:39:30.035443 | orchestrator | 11:39:30.033 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035456 | orchestrator | 11:39:30.033 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-05 11:39:30.035472 | orchestrator | 11:39:30.033 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-05 11:39:30.035482 | orchestrator | 11:39:30.033 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-05 11:39:30.035492 | orchestrator | 11:39:30.033 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035502 | orchestrator | 11:39:30.033 STDOUT terraform:  } 2025-04-05 11:39:30.035512 | orchestrator | 11:39:30.033 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-05 11:39:30.035522 | orchestrator | 11:39:30.033 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-05 11:39:30.035532 | orchestrator | 11:39:30.033 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.035542 | orchestrator | 11:39:30.033 STDOUT terraform:  + description = "management security group" 2025-04-05 11:39:30.035552 | orchestrator | 11:39:30.033 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035562 | orchestrator | 11:39:30.033 STDOUT terraform:  + name = "testbed-management" 2025-04-05 11:39:30.035572 | orchestrator | 11:39:30.033 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035581 | orchestrator | 11:39:30.033 STDOUT terraform:  + stateful = (known after apply) 2025-04-05 11:39:30.035591 | orchestrator | 11:39:30.033 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035601 | orchestrator | 11:39:30.033 STDOUT terraform:  } 2025-04-05 11:39:30.035616 | orchestrator | 11:39:30.033 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-05 11:39:30.035626 | orchestrator | 11:39:30.034 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-04-05 11:39:30.035636 | orchestrator | 11:39:30.034 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.035646 | orchestrator | 11:39:30.034 STDOUT terraform:  + description = "node security group" 2025-04-05 11:39:30.035656 | orchestrator | 11:39:30.034 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035666 | orchestrator | 11:39:30.034 STDOUT terraform:  + name = "testbed-node" 2025-04-05 11:39:30.035676 | 
orchestrator | 11:39:30.034 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.035686 | orchestrator | 11:39:30.034 STDOUT terraform:  + stateful = (known after apply) 2025-04-05 11:39:30.035701 | orchestrator | 11:39:30.034 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.035711 | orchestrator | 11:39:30.034 STDOUT terraform:  } 2025-04-05 11:39:30.035721 | orchestrator | 11:39:30.034 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-05 11:39:30.035731 | orchestrator | 11:39:30.034 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-05 11:39:30.035741 | orchestrator | 11:39:30.034 STDOUT terraform:  + all_tags = (known after apply) 2025-04-05 11:39:30.035751 | orchestrator | 11:39:30.034 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-05 11:39:30.035761 | orchestrator | 11:39:30.034 STDOUT terraform:  + dns_nameservers = [ 2025-04-05 11:39:30.035772 | orchestrator | 11:39:30.034 STDOUT terraform:  + "8.8.8.8", 2025-04-05 11:39:30.035782 | orchestrator | 11:39:30.034 STDOUT terraform:  + "9.9.9.9", 2025-04-05 11:39:30.035792 | orchestrator | 11:39:30.034 STDOUT terraform:  ] 2025-04-05 11:39:30.035802 | orchestrator | 11:39:30.034 STDOUT terraform:  + enable_dhcp = true 2025-04-05 11:39:30.035812 | orchestrator | 11:39:30.034 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-05 11:39:30.035822 | orchestrator | 11:39:30.034 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.035832 | orchestrator | 11:39:30.034 STDOUT terraform:  + ip_version = 4 2025-04-05 11:39:30.035842 | orchestrator | 11:39:30.034 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-05 11:39:30.035852 | orchestrator | 11:39:30.034 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-05 11:39:30.035866 | orchestrator | 11:39:30.034 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-05 11:39:30.208373 | orchestrator | 11:39:30.034 STDOUT terraform:  + network_id = (known after apply) 2025-04-05 11:39:30.208451 | orchestrator | 11:39:30.034 STDOUT terraform:  + no_gateway = false 2025-04-05 11:39:30.208460 | orchestrator | 11:39:30.034 STDOUT terraform:  + region = (known after apply) 2025-04-05 11:39:30.208467 | orchestrator | 11:39:30.034 STDOUT terraform:  + service_types = (known after apply) 2025-04-05 11:39:30.208474 | orchestrator | 11:39:30.034 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-05 11:39:30.208481 | orchestrator | 11:39:30.034 STDOUT terraform:  + allocation_pool { 2025-04-05 11:39:30.208488 | orchestrator | 11:39:30.034 STDOUT terraform:  + end = "192.168.31.250" 2025-04-05 11:39:30.208495 | orchestrator | 11:39:30.034 STDOUT terraform:  + start = "192.168.31.200" 2025-04-05 11:39:30.208501 | orchestrator | 11:39:30.034 STDOUT terraform:  } 2025-04-05 11:39:30.208508 | orchestrator | 11:39:30.034 STDOUT terraform:  } 2025-04-05 11:39:30.208515 | orchestrator | 11:39:30.034 STDOUT terraform:  # terraform_data.image will be created 2025-04-05 11:39:30.208521 | orchestrator | 11:39:30.034 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-05 11:39:30.208527 | orchestrator | 11:39:30.034 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.208533 | orchestrator | 11:39:30.034 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-05 11:39:30.208553 | orchestrator | 11:39:30.034 STDOUT terraform:  + output = (known after apply) 2025-04-05 11:39:30.208560 | orchestrator | 
11:39:30.034 STDOUT terraform:  } 2025-04-05 11:39:30.208567 | orchestrator | 11:39:30.034 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-05 11:39:30.208573 | orchestrator | 11:39:30.035 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-05 11:39:30.208579 | orchestrator | 11:39:30.035 STDOUT terraform:  + id = (known after apply) 2025-04-05 11:39:30.208585 | orchestrator | 11:39:30.035 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-05 11:39:30.208591 | orchestrator | 11:39:30.035 STDOUT terraform:  + output = (known after apply) 2025-04-05 11:39:30.208597 | orchestrator | 11:39:30.035 STDOUT terraform:  } 2025-04-05 11:39:30.208604 | orchestrator | 11:39:30.035 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-05 11:39:30.208610 | orchestrator | 11:39:30.035 STDOUT terraform: Changes to Outputs: 2025-04-05 11:39:30.208616 | orchestrator | 11:39:30.035 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-05 11:39:30.208623 | orchestrator | 11:39:30.035 STDOUT terraform:  + private_key = (sensitive value) 2025-04-05 11:39:30.208640 | orchestrator | 11:39:30.207 STDOUT terraform: terraform_data.image: Creating... 2025-04-05 11:39:30.222118 | orchestrator | 11:39:30.207 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-05 11:39:30.222176 | orchestrator | 11:39:30.207 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7b2f48b7-53c4-f0de-4fe6-538e5df0b5f8] 2025-04-05 11:39:30.222185 | orchestrator | 11:39:30.207 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=d3a2fe12-f0be-d8f3-1bb1-1c961c4a2354] 2025-04-05 11:39:30.222198 | orchestrator | 11:39:30.221 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-04-05 11:39:30.230386 | orchestrator | 11:39:30.230 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-05 11:39:30.230479 | orchestrator | 11:39:30.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-05 11:39:30.231253 | orchestrator | 11:39:30.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-05 11:39:30.232957 | orchestrator | 11:39:30.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-04-05 11:39:30.236666 | orchestrator | 11:39:30.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-04-05 11:39:30.238472 | orchestrator | 11:39:30.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-04-05 11:39:30.238517 | orchestrator | 11:39:30.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-05 11:39:30.238529 | orchestrator | 11:39:30.238 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-04-05 11:39:30.239474 | orchestrator | 11:39:30.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-05 11:39:30.665077 | orchestrator | 11:39:30.663 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-05 11:39:30.668306 | orchestrator | 11:39:30.668 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 
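For orientation, the node_port_management entries planned above all follow one counted resource; a minimal sketch of that pattern, with the fixed addresses 192.168.16.10-.15 and the allowed address pairs taken from the plan output and everything else (variable-free literals, the node count, the security group reference) an assumption rather than the testbed repository's actual code:

resource "openstack_networking_port_v2" "node_port_management" {
  # one management port per testbed node; the plan shows indexes [0]..[5]
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  # assumption: ports carry the node security group created later in the plan
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  # static management addresses 192.168.16.10 .. 192.168.16.15
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"
  }

  # permit the internal range and the shared/VIP addresses seen in the plan
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}
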
2025-04-05 11:39:36.106890 | orchestrator | 11:39:36.106 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=c2b6c8dd-0150-4724-b231-405c6a6a88b3] 2025-04-05 11:39:36.114239 | orchestrator | 11:39:36.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-04-05 11:39:40.232307 | orchestrator | 11:39:40.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-04-05 11:39:40.232433 | orchestrator | 11:39:40.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-05 11:39:40.235346 | orchestrator | 11:39:40.235 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-04-05 11:39:40.237558 | orchestrator | 11:39:40.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-04-05 11:39:40.237748 | orchestrator | 11:39:40.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-05 11:39:40.239834 | orchestrator | 11:39:40.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-04-05 11:39:40.240960 | orchestrator | 11:39:40.240 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-04-05 11:39:40.241110 | orchestrator | 11:39:40.240 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-04-05 11:39:40.670897 | orchestrator | 11:39:40.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-05 11:39:40.842057 | orchestrator | 11:39:40.841 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=ef34b75a-3d34-4e78-9c2f-2912cb587233] 2025-04-05 11:39:40.846607 | orchestrator | 11:39:40.846 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-04-05 11:39:40.865951 | orchestrator | 11:39:40.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=2883817b-319c-4609-b3d8-ef6d07bb9413] 2025-04-05 11:39:40.871489 | orchestrator | 11:39:40.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-04-05 11:39:40.877585 | orchestrator | 11:39:40.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=cfed707b-504f-4ce7-a138-034721a1d783] 2025-04-05 11:39:40.884518 | orchestrator | 11:39:40.884 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-05 11:39:40.889490 | orchestrator | 11:39:40.889 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=ba8d5f0c-914f-4739-9d89-312c5c9b23ff] 2025-04-05 11:39:40.899029 | orchestrator | 11:39:40.898 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-05 11:39:40.908117 | orchestrator | 11:39:40.907 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=ff9999ad-bea3-493e-9af1-c705049c2ab2] 2025-04-05 11:39:40.912294 | orchestrator | 11:39:40.912 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
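The plan also declared a "testbed-management" security group with SSH (22/tcp), WireGuard (51820/udp), ICMP and intra-subnet TCP/UDP rules, plus a VRRP rule (IP protocol 112) for the node group. A minimal sketch of that shape, assuming the VRRP rule is attached to the node group (the log does not show which group it targets); the remaining rules differ only in protocol, ports and remote prefix:

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"  # VRRP is addressed by IP protocol number, not a port range
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumption
}
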
2025-04-05 11:39:40.916611 | orchestrator | 11:39:40.916 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 11s [id=4656da48-57a2-4eb8-982a-d76718d1cb02] 2025-04-05 11:39:40.920914 | orchestrator | 11:39:40.920 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-05 11:39:40.935514 | orchestrator | 11:39:40.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=5d2b1a52-3655-4f66-b4c6-42f0360176a6] 2025-04-05 11:39:40.942397 | orchestrator | 11:39:40.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-04-05 11:39:41.075094 | orchestrator | 11:39:41.074 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=d933188b-11f3-4ea5-a96e-67e7dafb4be4] 2025-04-05 11:39:41.088115 | orchestrator | 11:39:41.087 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-04-05 11:39:41.995058 | orchestrator | 11:39:41.994 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 12s [id=213baff1-89a7-4ff7-8a44-f121feb76d26] 2025-04-05 11:39:42.000601 | orchestrator | 11:39:42.000 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-05 11:39:42.466648 | orchestrator | 11:39:42.466 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-04-05 11:39:42.470096 | orchestrator | 11:39:42.469 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-04-05 11:39:42.522976 | orchestrator | 11:39:42.522 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-05 11:39:42.533104 | orchestrator | 11:39:42.532 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-04-05 11:39:46.117251 | orchestrator | 11:39:46.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-04-05 11:39:46.277922 | orchestrator | 11:39:46.277 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=04fc0b0c-cc3e-463d-bd93-2065fe130691] 2025-04-05 11:39:46.287471 | orchestrator | 11:39:46.287 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-04-05 11:39:50.847902 | orchestrator | 11:39:50.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-04-05 11:39:50.871895 | orchestrator | 11:39:50.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-04-05 11:39:50.885104 | orchestrator | 11:39:50.884 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-04-05 11:39:50.900975 | orchestrator | 11:39:50.900 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-04-05 11:39:50.912957 | orchestrator | 11:39:50.912 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-05 11:39:50.922196 | orchestrator | 11:39:50.921 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-04-05 11:39:50.941402 | orchestrator | 11:39:50.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... 
[10s elapsed] 2025-04-05 11:39:51.017287 | orchestrator | 11:39:51.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=1530be44-7738-4993-8ddf-f82dde1dd101] 2025-04-05 11:39:51.039297 | orchestrator | 11:39:51.039 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-04-05 11:39:51.044587 | orchestrator | 11:39:51.044 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=365799fdb494da99984e294b330f8700e3c48179] 2025-04-05 11:39:51.058826 | orchestrator | 11:39:51.058 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-04-05 11:39:51.071323 | orchestrator | 11:39:51.071 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=05f1e5c2-483d-4605-9e0a-4b755f2c5af8] 2025-04-05 11:39:51.073471 | orchestrator | 11:39:51.073 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=df502a29e53b4870f99ad4bf1adb9996c545026a] 2025-04-05 11:39:51.078824 | orchestrator | 11:39:51.078 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-04-05 11:39:51.079312 | orchestrator | 11:39:51.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-04-05 11:39:51.085934 | orchestrator | 11:39:51.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=d464d14f-d012-4c2b-ad7f-7584e12a8ff6] 2025-04-05 11:39:51.086582 | orchestrator | 11:39:51.086 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-04-05 11:39:51.092375 | orchestrator | 11:39:51.092 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-04-05 11:39:51.108748 | orchestrator | 11:39:51.108 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f] 2025-04-05 11:39:51.115480 | orchestrator | 11:39:51.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-04-05 11:39:51.127855 | orchestrator | 11:39:51.127 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=1b7be43a-8a0c-4734-8b26-2b6a058e961f] 2025-04-05 11:39:51.132441 | orchestrator | 11:39:51.132 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=3e7610b8-96df-421c-b96f-4d1684d93a4c] 2025-04-05 11:39:51.136988 | orchestrator | 11:39:51.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-04-05 11:39:51.140080 | orchestrator | 11:39:51.139 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-04-05 11:39:51.143418 | orchestrator | 11:39:51.143 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=08ad3194-03e6-46c2-bf31-80971387f831] 2025-04-05 11:39:51.257361 | orchestrator | 11:39:51.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=3319eb17-1f94-4384-b4eb-d4656240927c] 2025-04-05 11:39:51.960536 | orchestrator | 11:39:51.960 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=ee0a0851-44a4-45e9-a152-31b556597ce6] 2025-04-05 11:39:51.969157 | orchestrator | 11:39:51.968 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
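The subnet that just finished creating corresponds to the subnet_management plan entry above (CIDR 192.168.16.0/20, DHCP enabled, public resolvers, allocation pool 192.168.31.200-.250). A minimal sketch of that resource as planned; only the reference to the management network is an assumption:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out the high range, so the statically assigned
  # node and VIP addresses lower in the /20 are never reallocated
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
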
2025-04-05 11:39:52.534321 | orchestrator | 11:39:52.533 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-04-05 11:39:52.850389 | orchestrator | 11:39:52.849 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=b3838fc9-e09a-4571-a3af-aa8e1b3975db] 2025-04-05 11:39:58.666072 | orchestrator | 11:39:58.665 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=ebc4315f-0812-4601-bedc-060d51f935e2] 2025-04-05 11:39:58.672483 | orchestrator | 11:39:58.672 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-05 11:39:58.673919 | orchestrator | 11:39:58.673 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-04-05 11:39:58.677309 | orchestrator | 11:39:58.675 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-05 11:39:58.810338 | orchestrator | 11:39:58.809 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=97232bab-578f-4f56-8604-51984ce89b00] 2025-04-05 11:39:58.817780 | orchestrator | 11:39:58.817 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-05 11:39:58.823680 | orchestrator | 11:39:58.823 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-05 11:39:58.854187 | orchestrator | 11:39:58.853 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4e0c7bbc-5ef8-45ee-8b57-48ce094fb460] 2025-04-05 11:39:58.861765 | orchestrator | 11:39:58.861 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-05 11:39:59.072413 | orchestrator | 11:39:59.071 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=b4eb6aca-238a-4bd9-9941-cce5cf79dad0] 2025-04-05 11:39:59.079942 | orchestrator | 11:39:59.079 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-04-05 11:39:59.332567 | orchestrator | 11:39:59.332 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=4244653a-3b68-4797-b925-825338ad63fb] 2025-04-05 11:39:59.339922 | orchestrator | 11:39:59.339 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-05 11:39:59.440328 | orchestrator | 11:39:59.439 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=147b94ef-b583-49ae-bd81-ab6df1489fc6] 2025-04-05 11:39:59.446603 | orchestrator | 11:39:59.446 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-05 11:39:59.558670 | orchestrator | 11:39:59.558 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=27c49a65-4e94-4f59-83bc-5942193813f0] 2025-04-05 11:39:59.567255 | orchestrator | 11:39:59.567 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
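The router created above uplinks the management subnet to the external network from the plan. A minimal sketch of the two resources involved; in practice the external network ID would come from a variable rather than a literal:

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # public network ID from the plan
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
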
2025-04-05 11:39:59.667308 | orchestrator | 11:39:59.666 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=336d7520-ae1f-434c-a1d3-a252cfd11584] 2025-04-05 11:39:59.672319 | orchestrator | 11:39:59.672 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-04-05 11:39:59.678976 | orchestrator | 11:39:59.678 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=eb3ee73c-a81a-461b-9b58-ca03c76757ca] 2025-04-05 11:39:59.691276 | orchestrator | 11:39:59.691 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-05 11:39:59.780690 | orchestrator | 11:39:59.780 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=eda1cb9e-2c93-43e5-8824-9afc398af4a4] 2025-04-05 11:39:59.803198 | orchestrator | 11:39:59.803 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-04-05 11:40:01.079828 | orchestrator | 11:40:01.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-05 11:40:01.080243 | orchestrator | 11:40:01.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-04-05 11:40:01.093781 | orchestrator | 11:40:01.093 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-04-05 11:40:01.116999 | orchestrator | 11:40:01.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-04-05 11:40:01.138437 | orchestrator | 11:40:01.138 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-04-05 11:40:01.142792 | orchestrator | 11:40:01.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-04-05 11:40:01.411771 | orchestrator | 11:40:01.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=e366d25b-af81-4f6a-8721-ed881c3a6b03] 2025-04-05 11:40:01.418486 | orchestrator | 11:40:01.418 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=fafe3624-8e2d-43c3-8528-5f1430e0c7df] 2025-04-05 11:40:01.426652 | orchestrator | 11:40:01.426 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-04-05 11:40:01.429972 | orchestrator | 11:40:01.429 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-05 11:40:01.480033 | orchestrator | 11:40:01.479 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04] 2025-04-05 11:40:01.483209 | orchestrator | 11:40:01.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=6df06a38-89f5-41f7-80a7-38daa8b90597] 2025-04-05 11:40:01.494052 | orchestrator | 11:40:01.493 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
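The terraform_data.image and terraform_data.image_node resources planned earlier only pin the image name ("Ubuntu 24.04") as plan-time data; the actual image IDs come from the data.openstack_images_image_v2 lookups that were read during this apply. A minimal sketch of that pairing, assuming a most_recent filter that is not visible in the log:

data "openstack_images_image_v2" "image" {
  name        = "Ubuntu 24.04"
  most_recent = true  # assumption: pick the newest image with this name
}

resource "terraform_data" "image" {
  # records the requested image name in state; output mirrors input after apply
  input = "Ubuntu 24.04"
}
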
2025-04-05 11:40:01.495500 | orchestrator | 11:40:01.495 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=46a255ed-4eac-498b-800b-e13e0459e3b2] 2025-04-05 11:40:01.499723 | orchestrator | 11:40:01.499 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-04-05 11:40:01.502795 | orchestrator | 11:40:01.502 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-05 11:40:01.620862 | orchestrator | 11:40:01.620 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=9d8a9f41-75d1-4b36-9206-a005c55da2f8] 2025-04-05 11:40:01.631468 | orchestrator | 11:40:01.631 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-04-05 11:40:01.847366 | orchestrator | 11:40:01.846 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=02b30d98-f68b-49e7-bcf0-f6afebb00c24] 2025-04-05 11:40:02.121405 | orchestrator | 11:40:02.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=a3485950-3182-4145-b5a1-ad5c5b1bfb6f] 2025-04-05 11:40:04.499886 | orchestrator | 11:40:04.499 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=f031c7f2-0166-4568-87f2-a5e2b1ac1181] 2025-04-05 11:40:05.250311 | orchestrator | 11:40:05.249 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=fbd28308-a1cc-4131-b752-868df3fcd7f7] 2025-04-05 11:40:05.257507 | orchestrator | 11:40:05.257 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-05 11:40:05.269112 | orchestrator | 11:40:05.268 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=f2bff80a-77f9-4739-8899-6b009859f217] 2025-04-05 11:40:05.437080 | orchestrator | 11:40:05.436 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=05e67dad-c033-4120-aa9d-dba94bca6541] 2025-04-05 11:40:06.982467 | orchestrator | 11:40:06.981 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=bf28ed61-4ad0-4fe7-a65d-196fce7d107f] 2025-04-05 11:40:07.117681 | orchestrator | 11:40:07.117 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=3414a9fc-7f08-4ccc-a1f4-d4e08021d64f] 2025-04-05 11:40:07.151845 | orchestrator | 11:40:07.151 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=8f24ffed-c54b-45fe-9a61-bbd36698c300] 2025-04-05 11:40:07.273469 | orchestrator | 11:40:07.273 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=6cfad184-91b6-40ff-98e5-3a363b1c63df] 2025-04-05 11:40:07.310288 | orchestrator | 11:40:07.310 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-05 11:40:07.310526 | orchestrator | 11:40:07.310 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-04-05 11:40:07.313578 | orchestrator | 11:40:07.313 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-05 11:40:07.321114 | orchestrator | 11:40:07.320 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
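The manager's floating IP being created here is later associated with the manager management port. A minimal sketch of that pair, with the external pool name an assumption since it is not shown in this log:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"  # assumption: name of the external network the router uplinks to
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
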
2025-04-05 11:40:07.326554 | orchestrator | 11:40:07.326 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-04-05 11:40:07.327863 | orchestrator | 11:40:07.327 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-05 11:40:11.466744 | orchestrator | 11:40:11.466 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=35a2e73b-2c2d-4328-a2d0-94d4859cf41b] 2025-04-05 11:40:11.487377 | orchestrator | 11:40:11.487 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-05 11:40:11.492929 | orchestrator | 11:40:11.492 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-05 11:40:11.494331 | orchestrator | 11:40:11.494 STDOUT terraform: local_file.inventory: Creating... 2025-04-05 11:40:11.497207 | orchestrator | 11:40:11.497 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6bc617df006f0adead20eb86364ec2602cd35d6d] 2025-04-05 11:40:11.501768 | orchestrator | 11:40:11.501 STDOUT terraform: local_file.inventory: Creation complete after 1s [id=33c76e0d0d49332af33766aa01a2612b7853d7c7] 2025-04-05 11:40:12.134110 | orchestrator | 11:40:12.133 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=35a2e73b-2c2d-4328-a2d0-94d4859cf41b] 2025-04-05 11:40:17.319890 | orchestrator | 11:40:17.319 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-04-05 11:40:17.320025 | orchestrator | 11:40:17.319 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-05 11:40:17.320050 | orchestrator | 11:40:17.319 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-05 11:40:17.326048 | orchestrator | 11:40:17.325 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-05 11:40:17.328330 | orchestrator | 11:40:17.328 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-05 11:40:17.328471 | orchestrator | 11:40:17.328 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-05 11:40:27.323794 | orchestrator | 11:40:27.323 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-04-05 11:40:27.323924 | orchestrator | 11:40:27.323 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-04-05 11:40:27.324048 | orchestrator | 11:40:27.323 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-05 11:40:27.327000 | orchestrator | 11:40:27.326 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-04-05 11:40:27.329286 | orchestrator | 11:40:27.329 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-05 11:40:27.329431 | orchestrator | 11:40:27.329 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[20s elapsed] 2025-04-05 11:40:27.936333 | orchestrator | 11:40:27.935 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=2ef23514-9431-4541-81fc-cc7b99c41260] 2025-04-05 11:40:37.327546 | orchestrator | 11:40:37.327 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-04-05 11:40:37.327683 | orchestrator | 11:40:37.327 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-04-05 11:40:37.327908 | orchestrator | 11:40:37.327 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-04-05 11:40:37.328106 | orchestrator | 11:40:37.327 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-04-05 11:40:37.329671 | orchestrator | 11:40:37.329 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-04-05 11:40:37.904450 | orchestrator | 11:40:37.904 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=e5e8b18d-de71-461c-9eed-0d2738d97fc1] 2025-04-05 11:40:37.965579 | orchestrator | 11:40:37.965 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=9797c5f6-4637-403a-a23f-1f61b8fe2219] 2025-04-05 11:40:37.968967 | orchestrator | 11:40:37.968 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=1583e485-3eda-4bb1-8994-a9bab24bee98] 2025-04-05 11:40:38.000760 | orchestrator | 11:40:38.000 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=f7d78a16-87a8-4c67-b555-b8aef87e427d] 2025-04-05 11:40:47.331520 | orchestrator | 11:40:47.331 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-04-05 11:40:48.343829 | orchestrator | 11:40:48.343 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=f092a966-4361-400f-b0d4-5ee5c9f359bc] 2025-04-05 11:40:48.370845 | orchestrator | 11:40:48.369 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-05 11:40:48.375301 | orchestrator | 11:40:48.375 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8138926635688483092] 2025-04-05 11:40:48.375736 | orchestrator | 11:40:48.375 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-04-05 11:40:48.379510 | orchestrator | 11:40:48.379 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-04-05 11:40:48.379685 | orchestrator | 11:40:48.379 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-05 11:40:48.380466 | orchestrator | 11:40:48.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-04-05 11:40:48.383496 | orchestrator | 11:40:48.383 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-05 11:40:48.391298 | orchestrator | 11:40:48.391 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-04-05 11:40:48.394472 | orchestrator | 11:40:48.394 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-05 11:40:48.394760 | orchestrator | 11:40:48.394 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 
2025-04-05 11:40:48.401993 | orchestrator | 11:40:48.401 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-05 11:40:48.402804 | orchestrator | 11:40:48.402 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-04-05 11:40:53.710503 | orchestrator | 11:40:53.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=f092a966-4361-400f-b0d4-5ee5c9f359bc/cfed707b-504f-4ce7-a138-034721a1d783] 2025-04-05 11:40:53.723625 | orchestrator | 11:40:53.723 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-05 11:40:53.737287 | orchestrator | 11:40:53.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=1583e485-3eda-4bb1-8994-a9bab24bee98/04fc0b0c-cc3e-463d-bd93-2065fe130691] 2025-04-05 11:40:53.749253 | orchestrator | 11:40:53.749 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=2ef23514-9431-4541-81fc-cc7b99c41260/ef34b75a-3d34-4e78-9c2f-2912cb587233] 2025-04-05 11:40:53.749798 | orchestrator | 11:40:53.749 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-04-05 11:40:53.759230 | orchestrator | 11:40:53.759 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-05 11:40:53.768462 | orchestrator | 11:40:53.768 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=f092a966-4361-400f-b0d4-5ee5c9f359bc/ba8d5f0c-914f-4739-9d89-312c5c9b23ff] 2025-04-05 11:40:53.775447 | orchestrator | 11:40:53.775 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-04-05 11:40:53.782371 | orchestrator | 11:40:53.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 6s [id=2ef23514-9431-4541-81fc-cc7b99c41260/d933188b-11f3-4ea5-a96e-67e7dafb4be4] 2025-04-05 11:40:53.789366 | orchestrator | 11:40:53.789 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-05 11:40:53.799434 | orchestrator | 11:40:53.799 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=1583e485-3eda-4bb1-8994-a9bab24bee98/d464d14f-d012-4c2b-ad7f-7584e12a8ff6] 2025-04-05 11:40:53.799802 | orchestrator | 11:40:53.799 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=e5e8b18d-de71-461c-9eed-0d2738d97fc1/ff9999ad-bea3-493e-9af1-c705049c2ab2] 2025-04-05 11:40:53.805406 | orchestrator | 11:40:53.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=9797c5f6-4637-403a-a23f-1f61b8fe2219/af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f] 2025-04-05 11:40:53.813900 | orchestrator | 11:40:53.813 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-04-05 11:40:53.819145 | orchestrator | 11:40:53.819 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-05 11:40:53.821431 | orchestrator | 11:40:53.821 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 
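The volume attachments above pair each node server ID with a volume ID. If you want to cross-check what Terraform reports against the cloud itself, the plain openstack CLI can list the same resources; this is not part of the job, only an optional sanity check that assumes credentials for the target project are loaded.

#!/usr/bin/env bash
# Optional cross-check of the resources Terraform reports above (not part of the job).
# Assumes an OpenStack clouds.yaml/RC file for the testbed project is active.
set -euo pipefail

openstack server list          # the six node servers and, later, the manager
openstack volume list          # base and extra volumes; "Attached to" shows the server/volume pairing
openstack floating ip list     # the manager floating IP created above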
2025-04-05 11:40:53.828729 | orchestrator | 11:40:53.828 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=1583e485-3eda-4bb1-8994-a9bab24bee98/3e7610b8-96df-421c-b96f-4d1684d93a4c] 2025-04-05 11:40:53.831846 | orchestrator | 11:40:53.831 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=e5e8b18d-de71-461c-9eed-0d2738d97fc1/213baff1-89a7-4ff7-8a44-f121feb76d26] 2025-04-05 11:40:53.844686 | orchestrator | 11:40:53.844 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-04-05 11:40:59.039001 | orchestrator | 11:40:59.038 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=2ef23514-9431-4541-81fc-cc7b99c41260/08ad3194-03e6-46c2-bf31-80971387f831] 2025-04-05 11:40:59.065338 | orchestrator | 11:40:59.064 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=f7d78a16-87a8-4c67-b555-b8aef87e427d/1530be44-7738-4993-8ddf-f82dde1dd101] 2025-04-05 11:40:59.067378 | orchestrator | 11:40:59.067 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=f092a966-4361-400f-b0d4-5ee5c9f359bc/5d2b1a52-3655-4f66-b4c6-42f0360176a6] 2025-04-05 11:40:59.140528 | orchestrator | 11:40:59.140 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=9797c5f6-4637-403a-a23f-1f61b8fe2219/1b7be43a-8a0c-4734-8b26-2b6a058e961f] 2025-04-05 11:40:59.157420 | orchestrator | 11:40:59.157 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=f7d78a16-87a8-4c67-b555-b8aef87e427d/2883817b-319c-4609-b3d8-ef6d07bb9413] 2025-04-05 11:40:59.169937 | orchestrator | 11:40:59.169 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=e5e8b18d-de71-461c-9eed-0d2738d97fc1/4656da48-57a2-4eb8-982a-d76718d1cb02] 2025-04-05 11:40:59.182635 | orchestrator | 11:40:59.182 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=f7d78a16-87a8-4c67-b555-b8aef87e427d/05f1e5c2-483d-4605-9e0a-4b755f2c5af8] 2025-04-05 11:40:59.195735 | orchestrator | 11:40:59.195 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=9797c5f6-4637-403a-a23f-1f61b8fe2219/3319eb17-1f94-4384-b4eb-d4656240927c] 2025-04-05 11:41:03.846165 | orchestrator | 11:41:03.845 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-05 11:41:09.137993 | orchestrator | 11:41:09.137 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 15s [id=0f7a648b-d1b2-418b-a26a-a259a4103464] 2025-04-05 11:41:09.151374 | orchestrator | 11:41:09.151 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
2025-04-05 11:41:09.151464 | orchestrator | 11:41:09.151 STDOUT terraform: Outputs: 2025-04-05 11:41:09.151488 | orchestrator | 11:41:09.151 STDOUT terraform: manager_address = 2025-04-05 11:41:09.161371 | orchestrator | 11:41:09.151 STDOUT terraform: private_key = 2025-04-05 11:41:09.238848 | orchestrator | changed 2025-04-05 11:41:09.265879 | 2025-04-05 11:41:09.265989 | TASK [Create infrastructure (stable)] 2025-04-05 11:41:09.367647 | orchestrator | skipping: Conditional result was False 2025-04-05 11:41:09.379492 | 2025-04-05 11:41:09.379604 | TASK [Fetch manager address] 2025-04-05 11:41:19.812049 | orchestrator | ok 2025-04-05 11:41:19.828874 | 2025-04-05 11:41:19.829019 | TASK [Set manager_host address] 2025-04-05 11:41:19.944787 | orchestrator | ok 2025-04-05 11:41:19.953376 | 2025-04-05 11:41:19.953480 | LOOP [Update ansible collections] 2025-04-05 11:41:20.732105 | orchestrator | changed 2025-04-05 11:41:21.477517 | orchestrator | changed 2025-04-05 11:41:21.494581 | 2025-04-05 11:41:21.494696 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-05 11:41:32.044408 | orchestrator | ok 2025-04-05 11:41:32.058343 | 2025-04-05 11:41:32.058456 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-05 11:42:32.111884 | orchestrator | ok 2025-04-05 11:42:32.123749 | 2025-04-05 11:42:32.123864 | TASK [Fetch manager ssh hostkey] 2025-04-05 11:42:33.204934 | orchestrator | Output suppressed because no_log was given 2025-04-05 11:42:33.222396 | 2025-04-05 11:42:33.222536 | TASK [Get ssh keypair from terraform environment] 2025-04-05 11:42:33.780615 | orchestrator | changed 2025-04-05 11:42:33.806964 | 2025-04-05 11:42:33.807136 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-05 11:42:33.852350 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
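The "Fetch manager address" and "Wait up to 300 seconds for port 22 to become open and contain OpenSSH" tasks above retrieve the address exported by the Terraform apply and block until the manager answers with an SSH banner. The job does this via the local_file.MANAGER_ADDRESS resource and (presumably) Ansible's wait_for; the following is only a rough shell equivalent of the same idea, with the -raw output lookup and the banner loop as illustrative assumptions.

#!/usr/bin/env bash
# Rough shell equivalent of "Fetch manager address" and the OpenSSH wait above.
# Illustrative only; the job itself uses local_file.MANAGER_ADDRESS and an Ansible wait task.
set -euo pipefail

manager_host=$(terraform output -raw manager_address)   # output name as shown above; its value is suppressed in the log

deadline=$((SECONDS + 300))
until banner=$(timeout 5 bash -c "exec 3<>/dev/tcp/${manager_host}/22 && head -c 64 <&3" 2>/dev/null) \
      && [[ ${banner} == *OpenSSH* ]]; do
    (( SECONDS < deadline )) || { echo "no OpenSSH banner from ${manager_host}:22" >&2; exit 1; }
    sleep 5
done
echo "manager ${manager_host} is reachable via SSH"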
2025-04-05 11:42:33.861551 | 2025-04-05 11:42:33.861667 | TASK [Run manager part 0] 2025-04-05 11:42:34.702732 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-05 11:42:34.743917 | orchestrator | 2025-04-05 11:42:36.332865 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-05 11:42:36.332910 | orchestrator | 2025-04-05 11:42:36.332931 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-05 11:42:36.332948 | orchestrator | ok: [testbed-manager] 2025-04-05 11:42:38.094195 | orchestrator | 2025-04-05 11:42:38.094295 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-05 11:42:38.094317 | orchestrator | 2025-04-05 11:42:38.094330 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:42:38.094350 | orchestrator | ok: [testbed-manager] 2025-04-05 11:42:38.715230 | orchestrator | 2025-04-05 11:42:38.715289 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-05 11:42:38.715313 | orchestrator | ok: [testbed-manager] 2025-04-05 11:42:38.763020 | orchestrator | 2025-04-05 11:42:38.763061 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-05 11:42:38.763076 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.793133 | orchestrator | 2025-04-05 11:42:38.793234 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-05 11:42:38.793263 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.822409 | orchestrator | 2025-04-05 11:42:38.822459 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-05 11:42:38.822477 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.853545 | orchestrator | 2025-04-05 11:42:38.853584 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-05 11:42:38.853597 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.884067 | orchestrator | 2025-04-05 11:42:38.884099 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-05 11:42:38.884115 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.915014 | orchestrator | 2025-04-05 11:42:38.915084 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-05 11:42:38.915112 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:38.947049 | orchestrator | 2025-04-05 11:42:38.947089 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-05 11:42:38.947100 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:42:39.662788 | orchestrator | 2025-04-05 11:42:39.662841 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-05 11:42:39.662860 | orchestrator | changed: [testbed-manager] 2025-04-05 11:45:22.100351 | orchestrator | 2025-04-05 11:45:22.100432 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-05 11:45:22.100477 | orchestrator | changed: [testbed-manager] 2025-04-05 11:46:41.463599 | orchestrator | 2025-04-05 11:46:41.463711 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-04-05 11:46:41.463746 | orchestrator | changed: [testbed-manager] 2025-04-05 11:46:59.340628 | orchestrator | 2025-04-05 11:46:59.340697 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-05 11:46:59.340722 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:07.355012 | orchestrator | 2025-04-05 11:47:07.355142 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-05 11:47:07.355180 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:07.408307 | orchestrator | 2025-04-05 11:47:07.408377 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-05 11:47:07.408409 | orchestrator | ok: [testbed-manager] 2025-04-05 11:47:08.175731 | orchestrator | 2025-04-05 11:47:08.175844 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-05 11:47:08.175878 | orchestrator | ok: [testbed-manager] 2025-04-05 11:47:08.880265 | orchestrator | 2025-04-05 11:47:08.880367 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-05 11:47:08.880410 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:15.094304 | orchestrator | 2025-04-05 11:47:15.094409 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-05 11:47:15.094445 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:20.756284 | orchestrator | 2025-04-05 11:47:20.756395 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-05 11:47:20.756443 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:23.227527 | orchestrator | 2025-04-05 11:47:23.227701 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-05 11:47:23.227738 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:24.866727 | orchestrator | 2025-04-05 11:47:24.866810 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-05 11:47:24.866840 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:25.925342 | orchestrator | 2025-04-05 11:47:25.925392 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-05 11:47:25.925410 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-05 11:47:25.969146 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-05 11:47:25.969237 | orchestrator | 2025-04-05 11:47:25.969257 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-05 11:47:25.969284 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-05 11:47:29.061734 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-05 11:47:29.061804 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-05 11:47:29.061815 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-05 11:47:29.061835 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-05 11:47:29.609278 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-05 11:47:29.609334 | orchestrator | 2025-04-05 11:47:29.609344 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-05 11:47:29.609361 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:52.099644 | orchestrator | 2025-04-05 11:47:52.099694 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-05 11:47:52.099711 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-05 11:47:54.304586 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-05 11:47:54.304626 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-05 11:47:54.304632 | orchestrator | 2025-04-05 11:47:54.304639 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-05 11:47:54.304651 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-04-05 11:47:55.695473 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-05 11:47:55.695519 | orchestrator | 2025-04-05 11:47:55.695525 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-05 11:47:55.695531 | orchestrator | 2025-04-05 11:47:55.695536 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:47:55.695547 | orchestrator | ok: [testbed-manager] 2025-04-05 11:47:55.730746 | orchestrator | 2025-04-05 11:47:55.730792 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-05 11:47:55.730809 | orchestrator | ok: [testbed-manager] 2025-04-05 11:47:55.789476 | orchestrator | 2025-04-05 11:47:55.789523 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-05 11:47:55.789541 | orchestrator | ok: [testbed-manager] 2025-04-05 11:47:56.531812 | orchestrator | 2025-04-05 11:47:56.531861 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-05 11:47:56.531881 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:57.240614 | orchestrator | 2025-04-05 11:47:57.240708 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-05 11:47:57.240741 | orchestrator | changed: [testbed-manager] 2025-04-05 11:47:58.522830 | orchestrator | 2025-04-05 11:47:58.522924 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-05 11:47:58.522958 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-05 11:47:59.822132 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-05 11:47:59.822262 | orchestrator | 2025-04-05 11:47:59.822285 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-05 11:47:59.822314 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:01.487873 | orchestrator | 2025-04-05 11:48:01.487968 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-05 11:48:01.488000 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 
11:48:02.033561 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-05 11:48:02.033613 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-05 11:48:02.033625 | orchestrator | 2025-04-05 11:48:02.033634 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-05 11:48:02.033653 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:02.103336 | orchestrator | 2025-04-05 11:48:02.103390 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-05 11:48:02.103406 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:02.928100 | orchestrator | 2025-04-05 11:48:02.928152 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-05 11:48:02.928172 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:48:02.967332 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:02.967373 | orchestrator | 2025-04-05 11:48:02.967383 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-05 11:48:02.967399 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:03.001093 | orchestrator | 2025-04-05 11:48:03.001134 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-05 11:48:03.001151 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:03.041065 | orchestrator | 2025-04-05 11:48:03.041102 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-05 11:48:03.041118 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:03.094104 | orchestrator | 2025-04-05 11:48:03.094143 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-05 11:48:03.094159 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:03.756457 | orchestrator | 2025-04-05 11:48:03.756557 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-05 11:48:03.756594 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:05.172535 | orchestrator | 2025-04-05 11:48:05.172578 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-05 11:48:05.172585 | orchestrator | 2025-04-05 11:48:05.172590 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:48:05.172601 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:06.107145 | orchestrator | 2025-04-05 11:48:06.107242 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-05 11:48:06.107271 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:06.215111 | orchestrator | 2025-04-05 11:48:06.215179 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 11:48:06.215187 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-05 11:48:06.215192 | orchestrator | 2025-04-05 11:48:06.584784 | orchestrator | changed 2025-04-05 11:48:06.603003 | 2025-04-05 11:48:06.603138 | TASK [Point out that the log in on the manager is now possible] 2025-04-05 11:48:06.653427 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
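"Run manager part 0" above bootstraps the Ansible tooling on the manager: a virtualenv in /opt/venv with netaddr, ansible-core, requests>=2.32.2 and docker>=7.1.0, followed by the Galaxy collections ansible.netcommon, ansible.posix and community.docker>=3.10.2 plus the two local osism collections synced to /opt/src. A condensed sketch of equivalent commands follows; the playbook drives this through Ansible modules, and the paths under /opt/src/osism and /usr/share/ansible/collections are taken from the surrounding log rather than from the playbook source.

#!/usr/bin/env bash
# Condensed sketch of the tooling bootstrap performed by "Run manager part 0" above.
# Not the playbook's literal implementation; package names and paths are taken from the log.
set -euo pipefail

python3 -m venv /opt/venv
/opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'

/opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
    ansible.netcommon ansible.posix 'community.docker>=3.10.2'

/opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
    /opt/src/osism/ansible-collection-commons \
    /opt/src/osism/ansible-collection-services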
2025-04-05 11:48:06.666222 | 2025-04-05 11:48:06.666360 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-05 11:48:06.714599 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-04-05 11:48:06.724084 | 2025-04-05 11:48:06.724192 | TASK [Run manager part 1 + 2] 2025-04-05 11:48:07.600240 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-05 11:48:07.656558 | orchestrator | 2025-04-05 11:48:10.104530 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-05 11:48:10.104625 | orchestrator | 2025-04-05 11:48:10.104933 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:48:10.104981 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:10.139577 | orchestrator | 2025-04-05 11:48:10.139621 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-05 11:48:10.139639 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:10.182603 | orchestrator | 2025-04-05 11:48:10.182635 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-05 11:48:10.182647 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:10.228579 | orchestrator | 2025-04-05 11:48:10.228651 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-05 11:48:10.228674 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:10.299502 | orchestrator | 2025-04-05 11:48:10.299532 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-05 11:48:10.299544 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:10.362705 | orchestrator | 2025-04-05 11:48:10.362769 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-05 11:48:10.362797 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:10.402347 | orchestrator | 2025-04-05 11:48:10.402393 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-05 11:48:10.402420 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-05 11:48:11.078474 | orchestrator | 2025-04-05 11:48:11.078556 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-05 11:48:11.078587 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:11.125730 | orchestrator | 2025-04-05 11:48:11.125777 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-05 11:48:11.125794 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:12.413038 | orchestrator | 2025-04-05 11:48:12.413119 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-05 11:48:12.413165 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:12.958289 | orchestrator | 2025-04-05 11:48:12.958329 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-05 11:48:12.958345 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:14.091547 | orchestrator | 2025-04-05 11:48:14.091622 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-05 11:48:14.091653 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:25.670839 | orchestrator | 2025-04-05 11:48:25.670994 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-05 11:48:25.671021 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:26.282701 | orchestrator | 2025-04-05 11:48:26.282788 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-05 11:48:26.282820 | orchestrator | ok: [testbed-manager] 2025-04-05 11:48:26.337753 | orchestrator | 2025-04-05 11:48:26.337839 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-05 11:48:26.337873 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:27.289819 | orchestrator | 2025-04-05 11:48:27.289893 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-05 11:48:27.289921 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:28.184977 | orchestrator | 2025-04-05 11:48:28.185063 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-05 11:48:28.185093 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:28.739444 | orchestrator | 2025-04-05 11:48:28.739523 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-05 11:48:28.739553 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:28.779188 | orchestrator | 2025-04-05 11:48:28.779240 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-05 11:48:28.779254 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-05 11:48:30.814260 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-05 11:48:30.814332 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-05 11:48:30.814350 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-05 11:48:30.814374 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:39.260176 | orchestrator | 2025-04-05 11:48:39.260330 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-05 11:48:39.260354 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-05 11:48:40.279685 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-05 11:48:40.279795 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-05 11:48:40.279814 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-05 11:48:40.279831 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-05 11:48:40.279845 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-05 11:48:40.279859 | orchestrator | 2025-04-05 11:48:40.279874 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-05 11:48:40.279925 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:40.320102 | orchestrator | 2025-04-05 11:48:40.320220 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-05 11:48:40.320258 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:48:43.291764 | orchestrator | 2025-04-05 11:48:43.291810 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-05 11:48:43.291825 | orchestrator | changed: [testbed-manager] 2025-04-05 11:48:43.332573 | orchestrator | 2025-04-05 11:48:43.332653 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-05 11:48:43.332684 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:50:11.219998 | orchestrator | 2025-04-05 11:50:11.220052 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-05 11:50:11.220072 | orchestrator | changed: [testbed-manager] 2025-04-05 11:50:12.269440 | orchestrator | 2025-04-05 11:50:12.269535 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-05 11:50:12.269570 | orchestrator | ok: [testbed-manager] 2025-04-05 11:50:12.361926 | orchestrator | 2025-04-05 11:50:12.362149 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 11:50:12.362183 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-05 11:50:12.362190 | orchestrator | 2025-04-05 11:50:12.865136 | orchestrator | changed 2025-04-05 11:50:12.884215 | 2025-04-05 11:50:12.884338 | TASK [Reboot manager] 2025-04-05 11:50:14.427320 | orchestrator | changed 2025-04-05 11:50:14.447331 | 2025-04-05 11:50:14.447504 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-05 11:50:28.175204 | orchestrator | ok 2025-04-05 11:50:28.184860 | 2025-04-05 11:50:28.184971 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-05 11:51:28.224827 | orchestrator | ok 2025-04-05 11:51:28.236762 | 2025-04-05 11:51:28.236886 | TASK [Deploy manager + bootstrap nodes] 2025-04-05 11:51:30.330736 | orchestrator | 2025-04-05 11:51:30.334737 | orchestrator | # DEPLOY MANAGER 2025-04-05 11:51:30.334784 | orchestrator | 2025-04-05 11:51:30.334802 | orchestrator | + set -e 2025-04-05 11:51:30.334848 | orchestrator | + echo 2025-04-05 11:51:30.334866 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-05 11:51:30.334883 | 
orchestrator | + echo 2025-04-05 11:51:30.334907 | orchestrator | + cat /opt/manager-vars.sh 2025-04-05 11:51:30.334958 | orchestrator | export NUMBER_OF_NODES=6 2025-04-05 11:51:30.335667 | orchestrator | 2025-04-05 11:51:30.335700 | orchestrator | export CEPH_VERSION=quincy 2025-04-05 11:51:30.335727 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-05 11:51:30.335753 | orchestrator | export MANAGER_VERSION=latest 2025-04-05 11:51:30.335777 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-05 11:51:30.335803 | orchestrator | 2025-04-05 11:51:30.335829 | orchestrator | export ARA=false 2025-04-05 11:51:30.335853 | orchestrator | export TEMPEST=false 2025-04-05 11:51:30.335879 | orchestrator | export IS_ZUUL=true 2025-04-05 11:51:30.335905 | orchestrator | 2025-04-05 11:51:30.335930 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 11:51:30.335956 | orchestrator | export EXTERNAL_API=false 2025-04-05 11:51:30.335976 | orchestrator | 2025-04-05 11:51:30.335996 | orchestrator | export IMAGE_USER=ubuntu 2025-04-05 11:51:30.336021 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-05 11:51:30.336047 | orchestrator | 2025-04-05 11:51:30.336072 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-05 11:51:30.336088 | orchestrator | 2025-04-05 11:51:30.336109 | orchestrator | + echo 2025-04-05 11:51:30.336190 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 11:51:30.336223 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 11:51:30.382466 | orchestrator | ++ INTERACTIVE=false 2025-04-05 11:51:30.382497 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 11:51:30.382524 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 11:51:30.382539 | orchestrator | + source /opt/manager-vars.sh 2025-04-05 11:51:30.382554 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-05 11:51:30.382568 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-05 11:51:30.382582 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-05 11:51:30.382597 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-05 11:51:30.382611 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-05 11:51:30.382625 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-05 11:51:30.382647 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 11:51:30.382662 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 11:51:30.382676 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-05 11:51:30.382690 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-05 11:51:30.382704 | orchestrator | ++ export ARA=false 2025-04-05 11:51:30.382718 | orchestrator | ++ ARA=false 2025-04-05 11:51:30.382732 | orchestrator | ++ export TEMPEST=false 2025-04-05 11:51:30.382747 | orchestrator | ++ TEMPEST=false 2025-04-05 11:51:30.382761 | orchestrator | ++ export IS_ZUUL=true 2025-04-05 11:51:30.382775 | orchestrator | ++ IS_ZUUL=true 2025-04-05 11:51:30.382789 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 11:51:30.382803 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 11:51:30.382825 | orchestrator | ++ export EXTERNAL_API=false 2025-04-05 11:51:30.382840 | orchestrator | ++ EXTERNAL_API=false 2025-04-05 11:51:30.382854 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-05 11:51:30.382868 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-05 11:51:30.382882 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-05 11:51:30.382896 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-05 11:51:30.382913 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-04-05 11:51:30.382928 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-05 11:51:30.382943 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-05 11:51:30.382971 | orchestrator | + docker version 2025-04-05 11:51:30.626454 | orchestrator | Client: Docker Engine - Community 2025-04-05 11:51:30.629898 | orchestrator | Version: 27.5.1 2025-04-05 11:51:30.629927 | orchestrator | API version: 1.47 2025-04-05 11:51:30.629942 | orchestrator | Go version: go1.22.11 2025-04-05 11:51:30.629955 | orchestrator | Git commit: 9f9e405 2025-04-05 11:51:30.629970 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-04-05 11:51:30.629985 | orchestrator | OS/Arch: linux/amd64 2025-04-05 11:51:30.629998 | orchestrator | Context: default 2025-04-05 11:51:30.630012 | orchestrator | 2025-04-05 11:51:30.630069 | orchestrator | Server: Docker Engine - Community 2025-04-05 11:51:30.630084 | orchestrator | Engine: 2025-04-05 11:51:30.630097 | orchestrator | Version: 27.5.1 2025-04-05 11:51:30.630111 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-04-05 11:51:30.630154 | orchestrator | Go version: go1.22.11 2025-04-05 11:51:30.630171 | orchestrator | Git commit: 4c9b3b0 2025-04-05 11:51:30.630208 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-04-05 11:51:30.630223 | orchestrator | OS/Arch: linux/amd64 2025-04-05 11:51:30.630237 | orchestrator | Experimental: false 2025-04-05 11:51:30.630251 | orchestrator | containerd: 2025-04-05 11:51:30.630264 | orchestrator | Version: 1.7.27 2025-04-05 11:51:30.630278 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-05 11:51:30.630292 | orchestrator | runc: 2025-04-05 11:51:30.630306 | orchestrator | Version: 1.2.5 2025-04-05 11:51:30.630320 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-05 11:51:30.630334 | orchestrator | docker-init: 2025-04-05 11:51:30.630348 | orchestrator | Version: 0.19.0 2025-04-05 11:51:30.630362 | orchestrator | GitCommit: de40ad0 2025-04-05 11:51:30.630382 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-05 11:51:30.638414 | orchestrator | + set -e 2025-04-05 11:51:30.638696 | orchestrator | + source /opt/manager-vars.sh 2025-04-05 11:51:30.638716 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-05 11:51:30.638730 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-05 11:51:30.638744 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-05 11:51:30.638758 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-05 11:51:30.638772 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-05 11:51:30.638786 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-05 11:51:30.638800 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 11:51:30.638814 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 11:51:30.638828 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-05 11:51:30.638841 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-05 11:51:30.638855 | orchestrator | ++ export ARA=false 2025-04-05 11:51:30.638869 | orchestrator | ++ ARA=false 2025-04-05 11:51:30.638883 | orchestrator | ++ export TEMPEST=false 2025-04-05 11:51:30.638896 | orchestrator | ++ TEMPEST=false 2025-04-05 11:51:30.638910 | orchestrator | ++ export IS_ZUUL=true 2025-04-05 11:51:30.638924 | orchestrator | ++ IS_ZUUL=true 2025-04-05 11:51:30.638938 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 11:51:30.638952 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 
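The "+"/"++" lines above are the set -x trace of the deploy wrapper executed for "Deploy manager + bootstrap nodes". Reconstructed into readable form it amounts to roughly the following; this is inferred from the trace only, the authoritative script lives in the testbed configuration repository.

#!/usr/bin/env bash
# Readable reconstruction of the deploy wrapper traced above (inferred from the trace,
# not copied from the repository).
set -e

echo
echo '# DEPLOY MANAGER'
echo
cat /opt/manager-vars.sh                        # NUMBER_OF_NODES, CEPH_VERSION, MANAGER_VERSION, OPENSTACK_VERSION, ...
echo

source /opt/configuration/scripts/include.sh    # INTERACTIVE=false, OSISM_APPLY_RETRY=1
source /opt/manager-vars.sh

sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
docker version                                  # confirms the Docker Engine on the manager is reachable

sh -c /opt/configuration/scripts/deploy/000-manager.sh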
2025-04-05 11:51:30.638966 | orchestrator | ++ export EXTERNAL_API=false 2025-04-05 11:51:30.638991 | orchestrator | ++ EXTERNAL_API=false 2025-04-05 11:51:30.639004 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-05 11:51:30.639023 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-05 11:51:30.639037 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-05 11:51:30.639051 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-05 11:51:30.639065 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-05 11:51:30.639084 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-05 11:51:30.639102 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 11:51:30.639116 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 11:51:30.639154 | orchestrator | ++ INTERACTIVE=false 2025-04-05 11:51:30.639168 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 11:51:30.639182 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 11:51:30.639201 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 11:51:30.645036 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-05 11:51:30.645060 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-04-05 11:51:30.645079 | orchestrator | + set -e 2025-04-05 11:51:30.645517 | orchestrator | + VERSION=quincy 2025-04-05 11:51:30.646054 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-04-05 11:51:30.651394 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-04-05 11:51:30.656064 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-04-05 11:51:30.656092 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-04-05 11:51:30.663057 | orchestrator | + set -e 2025-04-05 11:51:30.663522 | orchestrator | + VERSION=2024.1 2025-04-05 11:51:30.663545 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-04-05 11:51:30.666395 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-04-05 11:51:30.672152 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-04-05 11:51:30.672179 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-04-05 11:51:30.672873 | orchestrator | ++ semver latest 7.0.0 2025-04-05 11:51:30.732222 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-05 11:51:30.732244 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-05 11:51:30.732259 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-04-05 11:51:30.732278 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-04-05 11:51:30.768620 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-05 11:51:30.771880 | orchestrator | + source /opt/venv/bin/activate 2025-04-05 11:51:30.772847 | orchestrator | ++ deactivate nondestructive 2025-04-05 11:51:30.772870 | orchestrator | ++ '[' -n '' ']' 2025-04-05 11:51:30.772887 | orchestrator | ++ '[' -n '' ']' 2025-04-05 11:51:30.772906 | orchestrator | ++ hash -r 2025-04-05 11:51:30.772924 | orchestrator | ++ '[' -n '' ']' 2025-04-05 11:51:30.773039 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-05 11:51:30.773061 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-05 11:51:30.773315 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-05 11:51:30.773340 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-05 11:51:30.773478 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-05 11:51:30.773496 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-05 11:51:30.773515 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-05 11:51:30.773650 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-05 11:51:30.773669 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-05 11:51:30.773684 | orchestrator | ++ export PATH 2025-04-05 11:51:30.773698 | orchestrator | ++ '[' -n '' ']' 2025-04-05 11:51:30.773716 | orchestrator | ++ '[' -z '' ']' 2025-04-05 11:51:30.773777 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-05 11:51:30.773792 | orchestrator | ++ PS1='(venv) ' 2025-04-05 11:51:30.773806 | orchestrator | ++ export PS1 2025-04-05 11:51:30.773820 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-05 11:51:30.773834 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-05 11:51:30.773852 | orchestrator | ++ hash -r 2025-04-05 11:51:30.773974 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-04-05 11:51:31.857251 | orchestrator | 2025-04-05 11:51:32.415470 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-04-05 11:51:32.415556 | orchestrator | 2025-04-05 11:51:32.415572 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-05 11:51:32.415595 | orchestrator | ok: [testbed-manager] 2025-04-05 11:51:33.395304 | orchestrator | 2025-04-05 11:51:33.395420 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-05 11:51:33.395456 | orchestrator | changed: [testbed-manager] 2025-04-05 11:51:35.619468 | orchestrator | 2025-04-05 11:51:35.619584 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-05 11:51:35.619602 | orchestrator | 2025-04-05 11:51:35.619617 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:51:35.619650 | orchestrator | ok: [testbed-manager] 2025-04-05 11:51:40.444798 | orchestrator | 2025-04-05 11:51:40.444935 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-05 11:51:40.444997 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-05 11:52:56.077350 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.7.2) 2025-04-05 11:52:56.077481 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:quincy) 2025-04-05 11:52:56.077499 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-04-05 11:52:56.077513 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.1) 2025-04-05 11:52:56.077526 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-04-05 11:52:56.077539 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-04-05 11:52:56.077552 | 
orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-04-05 11:52:56.077564 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-04-05 11:52:56.077577 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.8-alpine) 2025-04-05 11:52:56.077590 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.5) 2025-04-05 11:52:56.077602 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.19.0) 2025-04-05 11:52:56.077615 | orchestrator | 2025-04-05 11:52:56.077628 | orchestrator | TASK [Check status] ************************************************************ 2025-04-05 11:52:56.077683 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-05 11:52:56.124448 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-05 11:52:56.124481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-05 11:52:56.124494 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-05 11:52:56.124508 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j147224640196.1550', 'results_file': '/home/dragon/.ansible_async/j147224640196.1550', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124534 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j280180665279.1575', 'results_file': '/home/dragon/.ansible_async/j280180665279.1575', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124547 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-05 11:52:56.124566 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j562486915878.1600', 'results_file': '/home/dragon/.ansible_async/j562486915878.1600', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124580 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j104803379253.1632', 'results_file': '/home/dragon/.ansible_async/j104803379253.1632', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124597 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j655413346903.1664', 'results_file': '/home/dragon/.ansible_async/j655413346903.1664', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124610 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j323388747932.1696', 'results_file': '/home/dragon/.ansible_async/j323388747932.1696', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124623 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-05 11:52:56.124635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-04-05 11:52:56.124648 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j300009700328.1728', 'results_file': '/home/dragon/.ansible_async/j300009700328.1728', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124664 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j895056900249.1767', 'results_file': '/home/dragon/.ansible_async/j895056900249.1767', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124677 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j46863018633.1793', 'results_file': '/home/dragon/.ansible_async/j46863018633.1793', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124690 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j429093572327.1824', 'results_file': '/home/dragon/.ansible_async/j429093572327.1824', 'changed': True, 'item': 'index.docker.io/library/postgres:16.8-alpine', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124702 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j38379648215.1864', 'results_file': '/home/dragon/.ansible_async/j38379648215.1864', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.5', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124730 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j638161787230.1897', 'results_file': '/home/dragon/.ansible_async/j638161787230.1897', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.19.0', 'ansible_loop_var': 'item'}) 2025-04-05 11:52:56.124742 | orchestrator | 2025-04-05 11:52:56.124755 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-04-05 11:52:56.124775 | orchestrator | ok: [testbed-manager] 2025-04-05 11:52:56.585696 | orchestrator | 2025-04-05 11:52:56.585752 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-05 11:52:56.585774 | orchestrator | changed: [testbed-manager] 2025-04-05 11:52:56.922545 | orchestrator | 2025-04-05 11:52:56.922697 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-05 11:52:56.922751 | orchestrator | changed: [testbed-manager] 2025-04-05 11:52:57.243880 | orchestrator | 2025-04-05 11:52:57.243984 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-05 11:52:57.244017 | orchestrator | changed: [testbed-manager] 2025-04-05 11:52:57.280735 | orchestrator | 2025-04-05 11:52:57.280766 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-05 11:52:57.280810 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:52:57.591380 | orchestrator | 2025-04-05 11:52:57.591467 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-05 11:52:57.591486 | orchestrator | ok: [testbed-manager] 2025-04-05 11:52:57.732103 | orchestrator | 2025-04-05 11:52:57.732119 | orchestrator | TASK [Add nova_compute_virt_type parameter] 
************************************ 2025-04-05 11:52:57.732130 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:52:59.486674 | orchestrator | 2025-04-05 11:52:59.486769 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-05 11:52:59.486781 | orchestrator | 2025-04-05 11:52:59.486791 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:52:59.486813 | orchestrator | ok: [testbed-manager] 2025-04-05 11:52:59.688590 | orchestrator | 2025-04-05 11:52:59.688647 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-05 11:52:59.688666 | orchestrator | 2025-04-05 11:52:59.781951 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-05 11:52:59.781996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-05 11:53:00.826230 | orchestrator | 2025-04-05 11:53:00.826362 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-05 11:53:00.826405 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-05 11:53:02.550983 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-05 11:53:02.551152 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-05 11:53:02.551174 | orchestrator | 2025-04-05 11:53:02.551190 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-05 11:53:02.551223 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-05 11:53:03.201362 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-05 11:53:03.201468 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-05 11:53:03.201485 | orchestrator | 2025-04-05 11:53:03.201516 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-05 11:53:03.201557 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:03.798950 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:03.799049 | orchestrator | 2025-04-05 11:53:03.799065 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-04-05 11:53:03.799119 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:03.869545 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:03.869618 | orchestrator | 2025-04-05 11:53:03.869634 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-05 11:53:03.869661 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:04.221191 | orchestrator | 2025-04-05 11:53:04.221294 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-05 11:53:04.221327 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:04.314409 | orchestrator | 2025-04-05 11:53:04.314447 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-05 11:53:04.314469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-05 11:53:05.345850 | orchestrator | 2025-04-05 11:53:05.345914 | orchestrator | TASK [osism.services.traefik : Create 
traefik external network] **************** 2025-04-05 11:53:05.345942 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:06.116661 | orchestrator | 2025-04-05 11:53:06.116762 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-05 11:53:06.116793 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:09.115940 | orchestrator | 2025-04-05 11:53:09.116059 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-05 11:53:09.116127 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:09.389536 | orchestrator | 2025-04-05 11:53:09.389633 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-05 11:53:09.389666 | orchestrator | 2025-04-05 11:53:09.568261 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-04-05 11:53:09.568372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 11:53:11.839946 | orchestrator | 2025-04-05 11:53:11.840033 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-05 11:53:11.840063 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:11.968030 | orchestrator | 2025-04-05 11:53:11.968061 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-05 11:53:11.968120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-05 11:53:13.083576 | orchestrator | 2025-04-05 11:53:13.083671 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-05 11:53:13.083701 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-04-05 11:53:13.182610 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-04-05 11:53:13.182659 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-05 11:53:13.182672 | orchestrator | 2025-04-05 11:53:13.182686 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-05 11:53:13.182709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-05 11:53:13.792587 | orchestrator | 2025-04-05 11:53:13.792693 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-05 11:53:13.792728 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-05 11:53:14.414589 | orchestrator | 2025-04-05 11:53:14.414693 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-04-05 11:53:14.414727 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:15.045564 | orchestrator | 2025-04-05 11:53:15.045665 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-05 11:53:15.045695 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:15.443140 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:15.443239 | orchestrator | 2025-04-05 11:53:15.443256 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-05 11:53:15.443287 | orchestrator | 
changed: [testbed-manager] 2025-04-05 11:53:15.775445 | orchestrator | 2025-04-05 11:53:15.775484 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-05 11:53:15.775508 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:15.829438 | orchestrator | 2025-04-05 11:53:15.829466 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-05 11:53:15.829485 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:16.445682 | orchestrator | 2025-04-05 11:53:16.445788 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-05 11:53:16.445845 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:16.543142 | orchestrator | 2025-04-05 11:53:16.543171 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-05 11:53:16.543190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-05 11:53:17.264674 | orchestrator | 2025-04-05 11:53:17.264795 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-05 11:53:17.264843 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-04-05 11:53:17.892352 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-05 11:53:17.893054 | orchestrator | 2025-04-05 11:53:17.893115 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-05 11:53:17.893146 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-05 11:53:18.537789 | orchestrator | 2025-04-05 11:53:18.537893 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-04-05 11:53:18.537931 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:18.590386 | orchestrator | 2025-04-05 11:53:18.590451 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-05 11:53:18.590482 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:19.237652 | orchestrator | 2025-04-05 11:53:19.237756 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-05 11:53:19.237789 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:20.973281 | orchestrator | 2025-04-05 11:53:20.973368 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-05 11:53:20.973398 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:26.554882 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:26.554988 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:53:26.555004 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:26.555019 | orchestrator | 2025-04-05 11:53:26.555033 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-05 11:53:26.555063 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-05 11:53:27.171652 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-04-05 11:53:27.171722 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-05 11:53:27.171736 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-05 11:53:27.171749 | 
orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-05 11:53:27.171762 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-05 11:53:27.171774 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-05 11:53:27.171787 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-05 11:53:27.171800 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-05 11:53:27.171813 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-05 11:53:27.171825 | orchestrator | 2025-04-05 11:53:27.171838 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-05 11:53:27.171863 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-05 11:53:27.335490 | orchestrator | 2025-04-05 11:53:27.335520 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-05 11:53:27.335542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-05 11:53:28.006506 | orchestrator | 2025-04-05 11:53:28.006558 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-05 11:53:28.006585 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:28.610694 | orchestrator | 2025-04-05 11:53:28.610771 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-05 11:53:28.610799 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:29.339606 | orchestrator | 2025-04-05 11:53:29.339701 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-05 11:53:29.339733 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:31.495503 | orchestrator | 2025-04-05 11:53:31.496295 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-05 11:53:31.496457 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:32.420729 | orchestrator | 2025-04-05 11:53:32.420832 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-05 11:53:32.420866 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:54.366382 | orchestrator | 2025-04-05 11:53:54.366502 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-05 11:53:54.366531 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
2025-04-05 11:53:54.435328 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:54.435371 | orchestrator | 2025-04-05 11:53:54.435382 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-05 11:53:54.435401 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:54.484626 | orchestrator | 2025-04-05 11:53:54.484646 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-05 11:53:54.484656 | orchestrator | 2025-04-05 11:53:54.484666 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-05 11:53:54.484679 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:54.568188 | orchestrator | 2025-04-05 11:53:54.568207 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-05 11:53:54.568222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-04-05 11:53:55.287104 | orchestrator | 2025-04-05 11:53:55.287201 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-05 11:53:55.287230 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:55.368391 | orchestrator | 2025-04-05 11:53:55.368441 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-05 11:53:55.368463 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:55.426400 | orchestrator | 2025-04-05 11:53:55.426429 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-05 11:53:55.426447 | orchestrator | ok: [testbed-manager] => { 2025-04-05 11:53:55.974082 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-05 11:53:55.974176 | orchestrator | } 2025-04-05 11:53:55.974192 | orchestrator | 2025-04-05 11:53:55.974207 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-05 11:53:55.974236 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:56.761454 | orchestrator | 2025-04-05 11:53:56.761566 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-05 11:53:56.761600 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:56.839097 | orchestrator | 2025-04-05 11:53:56.839137 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-05 11:53:56.839158 | orchestrator | ok: [testbed-manager] 2025-04-05 11:53:56.890158 | orchestrator | 2025-04-05 11:53:56.890200 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-04-05 11:53:56.890223 | orchestrator | ok: [testbed-manager] => { 2025-04-05 11:53:56.938858 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-04-05 11:53:56.938890 | orchestrator | } 2025-04-05 11:53:56.938904 | orchestrator | 2025-04-05 11:53:56.938919 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-05 11:53:56.938939 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:57.004875 | orchestrator | 2025-04-05 11:53:57.004925 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-05 11:53:57.004948 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:57.079386 | 
orchestrator | 2025-04-05 11:53:57.079416 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-05 11:53:57.079437 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:57.147618 | orchestrator | 2025-04-05 11:53:57.147658 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-05 11:53:57.147680 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:57.195853 | orchestrator | 2025-04-05 11:53:57.195881 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-05 11:53:57.195902 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:57.255936 | orchestrator | 2025-04-05 11:53:57.255998 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-05 11:53:57.256020 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:53:58.420665 | orchestrator | 2025-04-05 11:53:58.420775 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-05 11:53:58.420823 | orchestrator | changed: [testbed-manager] 2025-04-05 11:53:58.538590 | orchestrator | 2025-04-05 11:53:58.538624 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-05 11:53:58.538646 | orchestrator | ok: [testbed-manager] 2025-04-05 11:54:58.595009 | orchestrator | 2025-04-05 11:54:58.595190 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-05 11:54:58.595231 | orchestrator | Pausing for 60 seconds 2025-04-05 11:54:58.683248 | orchestrator | changed: [testbed-manager] 2025-04-05 11:54:58.683345 | orchestrator | 2025-04-05 11:54:58.683366 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-05 11:54:58.683395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-05 11:57:55.980604 | orchestrator | 2025-04-05 11:57:55.980745 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-05 11:57:55.980788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-05 11:57:57.808921 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-05 11:57:57.809075 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-04-05 11:57:57.809093 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-04-05 11:57:57.809108 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-05 11:57:57.809122 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-05 11:57:57.809137 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-04-05 11:57:57.809151 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 
2025-04-05 11:57:57.809165 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-05 11:57:57.809179 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-05 11:57:57.809193 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-05 11:57:57.809207 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-05 11:57:57.809221 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-05 11:57:57.809235 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-05 11:57:57.809249 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-05 11:57:57.809263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-05 11:57:57.809277 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-05 11:57:57.809292 | orchestrator | changed: [testbed-manager] 2025-04-05 11:57:57.809307 | orchestrator | 2025-04-05 11:57:57.809323 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-05 11:57:57.809337 | orchestrator | 2025-04-05 11:57:57.809351 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 11:57:57.809382 | orchestrator | ok: [testbed-manager] 2025-04-05 11:57:57.895033 | orchestrator | 2025-04-05 11:57:57.895080 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-05 11:57:57.895103 | orchestrator | 2025-04-05 11:57:57.948063 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-05 11:57:57.948130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 11:57:59.381084 | orchestrator | 2025-04-05 11:57:59.381189 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-05 11:57:59.381222 | orchestrator | ok: [testbed-manager] 2025-04-05 11:57:59.430720 | orchestrator | 2025-04-05 11:57:59.430748 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-05 11:57:59.430768 | orchestrator | ok: [testbed-manager] 2025-04-05 11:57:59.519529 | orchestrator | 2025-04-05 11:57:59.519559 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-05 11:57:59.519579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-05 11:58:02.228673 | orchestrator | 2025-04-05 11:58:02.228804 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-05 11:58:02.228842 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-05 11:58:02.831018 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-05 11:58:02.831084 | orchestrator | changed: 
[testbed-manager] => (item=/opt/manager/configuration) 2025-04-05 11:58:02.831101 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-05 11:58:02.831116 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-05 11:58:02.831131 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-05 11:58:02.831146 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-05 11:58:02.831160 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-05 11:58:02.831174 | orchestrator | 2025-04-05 11:58:02.831188 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-05 11:58:02.831216 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:02.915433 | orchestrator | 2025-04-05 11:58:02.915496 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-05 11:58:02.915522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-05 11:58:04.078296 | orchestrator | 2025-04-05 11:58:04.078397 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-04-05 11:58:04.078429 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-04-05 11:58:04.673357 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-05 11:58:04.673466 | orchestrator | 2025-04-05 11:58:04.673486 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-05 11:58:04.673516 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:04.726215 | orchestrator | 2025-04-05 11:58:04.726262 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-05 11:58:04.726297 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:58:04.776782 | orchestrator | 2025-04-05 11:58:04.776833 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-05 11:58:04.776856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-05 11:58:06.085530 | orchestrator | 2025-04-05 11:58:06.085636 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-05 11:58:06.085667 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:58:06.685783 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 11:58:06.685906 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:06.685927 | orchestrator | 2025-04-05 11:58:06.685944 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-05 11:58:06.686093 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:06.776175 | orchestrator | 2025-04-05 11:58:06.776264 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-05 11:58:06.776295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-05 11:58:07.367775 | orchestrator | 2025-04-05 11:58:07.367914 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-05 11:58:07.367948 | orchestrator | changed: [testbed-manager] => (item=None) 
2025-04-05 11:58:07.946701 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:07.946771 | orchestrator | 2025-04-05 11:58:07.946786 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-04-05 11:58:07.946811 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:08.056072 | orchestrator | 2025-04-05 11:58:08.056111 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-05 11:58:08.056132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-05 11:58:08.552817 | orchestrator | 2025-04-05 11:58:08.552890 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-05 11:58:08.552915 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:08.955088 | orchestrator | 2025-04-05 11:58:08.955154 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-05 11:58:08.955179 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:10.158836 | orchestrator | 2025-04-05 11:58:10.158944 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-05 11:58:10.159016 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-05 11:58:10.772434 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-05 11:58:10.772522 | orchestrator | 2025-04-05 11:58:10.772538 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-05 11:58:10.772567 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:11.132154 | orchestrator | 2025-04-05 11:58:11.132264 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-05 11:58:11.132297 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:11.483877 | orchestrator | 2025-04-05 11:58:11.484012 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-05 11:58:11.484046 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:11.523825 | orchestrator | 2025-04-05 11:58:11.523858 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-05 11:58:11.523879 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:58:11.649868 | orchestrator | 2025-04-05 11:58:11.649929 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-05 11:58:11.649965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-05 11:58:11.698875 | orchestrator | 2025-04-05 11:58:11.698924 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-05 11:58:11.699433 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:13.568149 | orchestrator | 2025-04-05 11:58:13.568261 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-05 11:58:13.568295 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-05 11:58:14.248848 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-05 11:58:14.248946 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-05 11:58:14.248961 | orchestrator | 2025-04-05 
11:58:14.249016 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-05 11:58:14.249047 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:14.937390 | orchestrator | 2025-04-05 11:58:14.937493 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-05 11:58:14.937525 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:15.011383 | orchestrator | 2025-04-05 11:58:15.011416 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-05 11:58:15.011438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-05 11:58:15.046727 | orchestrator | 2025-04-05 11:58:15.046762 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-05 11:58:15.046782 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:15.700766 | orchestrator | 2025-04-05 11:58:15.700863 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-05 11:58:15.700920 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-05 11:58:15.782537 | orchestrator | 2025-04-05 11:58:15.782602 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-05 11:58:15.782627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-05 11:58:16.454228 | orchestrator | 2025-04-05 11:58:16.454327 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-05 11:58:16.454358 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:17.056889 | orchestrator | 2025-04-05 11:58:17.057030 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-05 11:58:17.057072 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:17.111707 | orchestrator | 2025-04-05 11:58:17.111744 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-05 11:58:17.111767 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:58:17.169453 | orchestrator | 2025-04-05 11:58:17.169483 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-05 11:58:17.169504 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:17.961480 | orchestrator | 2025-04-05 11:58:17.961601 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-05 11:58:17.961639 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:56.649916 | orchestrator | 2025-04-05 11:58:56.650136 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-05 11:58:56.650169 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:57.290320 | orchestrator | 2025-04-05 11:58:57.290374 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-05 11:58:57.290397 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:59.753332 | orchestrator | 2025-04-05 11:58:59.753443 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-05 11:58:59.753477 | orchestrator | changed: [testbed-manager] 2025-04-05 11:58:59.806423 
| orchestrator | 2025-04-05 11:58:59.806452 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-05 11:58:59.806474 | orchestrator | ok: [testbed-manager] 2025-04-05 11:58:59.863689 | orchestrator | 2025-04-05 11:58:59.863715 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-05 11:58:59.863729 | orchestrator | 2025-04-05 11:58:59.863744 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-05 11:58:59.863764 | orchestrator | skipping: [testbed-manager] 2025-04-05 11:59:59.918342 | orchestrator | 2025-04-05 11:59:59.918474 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-05 11:59:59.918510 | orchestrator | Pausing for 60 seconds 2025-04-05 12:00:03.688985 | orchestrator | changed: [testbed-manager] 2025-04-05 12:00:03.689114 | orchestrator | 2025-04-05 12:00:03.689135 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-04-05 12:00:03.689167 | orchestrator | changed: [testbed-manager] 2025-04-05 12:00:45.119603 | orchestrator | 2025-04-05 12:00:45.119744 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-04-05 12:00:45.119784 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-05 12:00:52.820229 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-05 12:00:52.820364 | orchestrator | changed: [testbed-manager] 2025-04-05 12:00:52.820385 | orchestrator | 2025-04-05 12:00:52.820400 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-05 12:00:52.820442 | orchestrator | changed: [testbed-manager] 2025-04-05 12:00:52.901422 | orchestrator | 2025-04-05 12:00:52.901517 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-05 12:00:52.901550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-05 12:00:52.959090 | orchestrator | 2025-04-05 12:00:52.959171 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-05 12:00:52.959188 | orchestrator | 2025-04-05 12:00:52.959203 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-05 12:00:52.959261 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:00:53.074768 | orchestrator | 2025-04-05 12:00:53.074850 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:00:53.074870 | orchestrator | testbed-manager : ok=105 changed=56 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-05 12:00:53.074885 | orchestrator | 2025-04-05 12:00:53.074930 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-05 12:00:53.079976 | orchestrator | + deactivate 2025-04-05 12:00:53.080025 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-05 12:00:53.080034 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-05 12:00:53.080042 | orchestrator | + export PATH 2025-04-05 12:00:53.080049 
| orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-05 12:00:53.080057 | orchestrator | + '[' -n '' ']'
2025-04-05 12:00:53.080064 | orchestrator | + hash -r
2025-04-05 12:00:53.080071 | orchestrator | + '[' -n '' ']'
2025-04-05 12:00:53.080078 | orchestrator | + unset VIRTUAL_ENV
2025-04-05 12:00:53.080086 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-05 12:00:53.080093 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-05 12:00:53.080100 | orchestrator | + unset -f deactivate
2025-04-05 12:00:53.080108 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-04-05 12:00:53.080123 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-04-05 12:00:53.080563 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-04-05 12:00:53.080575 | orchestrator | + local max_attempts=60
2025-04-05 12:00:53.080583 | orchestrator | + local name=ceph-ansible
2025-04-05 12:00:53.080590 | orchestrator | + local attempt_num=1
2025-04-05 12:00:53.080602 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-04-05 12:00:53.117232 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-05 12:00:53.117573 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-04-05 12:00:53.117681 | orchestrator | + local max_attempts=60
2025-04-05 12:00:53.118321 | orchestrator | + local name=kolla-ansible
2025-04-05 12:00:53.118344 | orchestrator | + local attempt_num=1
2025-04-05 12:00:53.118362 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-04-05 12:00:53.147826 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-05 12:00:53.148102 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-04-05 12:00:53.148192 | orchestrator | + local max_attempts=60
2025-04-05 12:00:53.148212 | orchestrator | + local name=osism-ansible
2025-04-05 12:00:53.148227 | orchestrator | + local attempt_num=1
2025-04-05 12:00:53.148255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-04-05 12:00:53.176506 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-05 12:00:53.866413 | orchestrator | + [[ true == \t\r\u\e ]]
2025-04-05 12:00:53.866497 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-04-05 12:00:53.866527 | orchestrator | ++ semver latest 9.0.0
2025-04-05 12:00:53.916303 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-05 12:00:53.917473 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-04-05 12:00:53.917504 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1
2025-04-05 12:00:53.917521 | orchestrator | + local max_attempts=60
2025-04-05 12:00:53.917536 | orchestrator | + local name=netbox-netbox-1
2025-04-05 12:00:53.917552 | orchestrator | + local attempt_num=1
2025-04-05 12:00:53.917574 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1
2025-04-05 12:00:53.954337 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-05 12:00:53.962395 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh
2025-04-05 12:00:53.962491 | orchestrator | + set -e
2025-04-05 12:00:55.693172 | orchestrator | + osism manage netbox --parallel 4
2025-04-05 12:00:55.693303 | orchestrator | 2025-04-05 12:00:55 | INFO | It takes a moment until task f17e97fd-4522-4875-aa72-28a21560701a (netbox-manager) has been started and output is visible here.
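The xtrace above shows the deploy script's health gate: once the manager play has finished, it calls wait_for_container_healthy for ceph-ansible, kolla-ansible, osism-ansible and netbox-netbox-1, each time reading the container's health state with docker inspect. A minimal sketch of such a helper, reconstructed from the trace, is shown below; only the docker inspect probe, the healthy comparison and the max_attempts/attempt_num bookkeeping appear in the log, so the retry delay and the failure handling here are assumptions.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; sleep interval and
    # failure behaviour are assumptions, not taken from the log.
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        # Probe the container's health status until Docker reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
            if (( attempt_num == max_attempts )); then
                echo "Container ${name} did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 10
        done
    }

    # Usage, as seen in the trace:
    # wait_for_container_healthy 60 ceph-ansible

In this run every probed container is already healthy on the first attempt, so the script proceeds directly to /opt/configuration/scripts/bootstrap/000-netbox.sh and osism manage netbox --parallel 4.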
2025-04-05 12:00:57.586448 | orchestrator | 2025-04-05 12:00:57 | INFO  | Wait for NetBox service 2025-04-05 12:00:59.614321 | orchestrator | 2025-04-05 12:00:59.614767 | orchestrator | PLAY [Wait for NetBox service] ************************************************* 2025-04-05 12:00:59.689114 | orchestrator | 2025-04-05 12:00:59.689444 | orchestrator | TASK [Wait for NetBox service REST API] **************************************** 2025-04-05 12:01:06.564772 | orchestrator | ok: [localhost] 2025-04-05 12:01:06.565791 | orchestrator | 2025-04-05 12:01:06.566483 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:01:06.566871 | orchestrator | 2025-04-05 12:01:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:01:06.566902 | orchestrator | 2025-04-05 12:01:06 | INFO  | Please wait and do not abort execution. 2025-04-05 12:01:06.568253 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:01:07.116281 | orchestrator | 2025-04-05 12:01:07 | INFO  | Manage devicetypes 2025-04-05 12:01:09.415898 | orchestrator | 2025-04-05 12:01:09 | INFO  | Manage moduletypes 2025-04-05 12:01:09.536874 | orchestrator | 2025-04-05 12:01:09 | INFO  | Manage resources 2025-04-05 12:01:09.549754 | orchestrator | 2025-04-05 12:01:09 | INFO  | Handle file /netbox/resources/100-initialise.yml 2025-04-05 12:01:10.485690 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification. 2025-04-05 12:01:10.486791 | orchestrator | Manufacturer queued for addition: Arista 2025-04-05 12:01:10.486835 | orchestrator | Manufacturer queued for addition: Other 2025-04-05 12:01:10.487719 | orchestrator | Manufacturer Created: Arista - 2 2025-04-05 12:01:10.488339 | orchestrator | Manufacturer Created: Other - 3 2025-04-05 12:01:10.489283 | orchestrator | Device Type Created: Arista - DCS-7050TX3-48C8 - 2 2025-04-05 12:01:10.490494 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 1 2025-04-05 12:01:10.491012 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 2 2025-04-05 12:01:10.491997 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 3 2025-04-05 12:01:10.492739 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 4 2025-04-05 12:01:10.493632 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 5 2025-04-05 12:01:10.494390 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 6 2025-04-05 12:01:10.495308 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 7 2025-04-05 12:01:10.495978 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 8 2025-04-05 12:01:10.496879 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 9 2025-04-05 12:01:10.497762 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 10 2025-04-05 12:01:10.498227 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 11 2025-04-05 12:01:10.498811 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 12 2025-04-05 12:01:10.499603 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 13 2025-04-05 12:01:10.500371 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 14 2025-04-05 
12:01:10.500769 | orchestrator | Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 15 2025-04-05 12:01:10.501617 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 16 2025-04-05 12:01:10.501841 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 17 2025-04-05 12:01:10.502409 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 18 2025-04-05 12:01:10.502971 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 19 2025-04-05 12:01:10.503444 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 20 2025-04-05 12:01:10.503810 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 21 2025-04-05 12:01:10.504505 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 22 2025-04-05 12:01:10.504814 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 23 2025-04-05 12:01:10.505505 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 24 2025-04-05 12:01:10.505799 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 25 2025-04-05 12:01:10.506349 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 26 2025-04-05 12:01:10.506763 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 27 2025-04-05 12:01:10.507248 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 28 2025-04-05 12:01:10.507737 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 29 2025-04-05 12:01:10.508074 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 30 2025-04-05 12:01:10.508604 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 31 2025-04-05 12:01:10.508945 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 32 2025-04-05 12:01:10.509457 | orchestrator | Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 33 2025-04-05 12:01:10.509730 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 34 2025-04-05 12:01:10.510166 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 35 2025-04-05 12:01:10.510653 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 36 2025-04-05 12:01:10.510947 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 37 2025-04-05 12:01:10.511394 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 38 2025-04-05 12:01:10.511827 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 39 2025-04-05 12:01:10.512111 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 40 2025-04-05 12:01:10.512567 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 41 2025-04-05 12:01:10.512977 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 42 2025-04-05 12:01:10.513501 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 43 2025-04-05 12:01:10.513732 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 44 2025-04-05 12:01:10.514089 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 45 2025-04-05 12:01:10.514497 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 46 2025-04-05 
12:01:10.514830 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 47 2025-04-05 12:01:10.515258 | orchestrator | Interface Template Created: Ethernet48 - 10GBASE-T (10GE) - 2 - 48 2025-04-05 12:01:10.515668 | orchestrator | Interface Template Created: Ethernet49/1 - QSFP28 (100GE) - 2 - 49 2025-04-05 12:01:10.515957 | orchestrator | Interface Template Created: Ethernet50/1 - QSFP28 (100GE) - 2 - 50 2025-04-05 12:01:10.516364 | orchestrator | Interface Template Created: Ethernet51/1 - QSFP28 (100GE) - 2 - 51 2025-04-05 12:01:10.516761 | orchestrator | Interface Template Created: Ethernet52/1 - QSFP28 (100GE) - 2 - 52 2025-04-05 12:01:10.517060 | orchestrator | Interface Template Created: Ethernet53/1 - QSFP28 (100GE) - 2 - 53 2025-04-05 12:01:10.517462 | orchestrator | Interface Template Created: Ethernet54/1 - QSFP28 (100GE) - 2 - 54 2025-04-05 12:01:10.517782 | orchestrator | Interface Template Created: Ethernet55/1 - QSFP28 (100GE) - 2 - 55 2025-04-05 12:01:10.518088 | orchestrator | Interface Template Created: Ethernet56/1 - QSFP28 (100GE) - 2 - 56 2025-04-05 12:01:10.518526 | orchestrator | Interface Template Created: Management1 - 1000BASE-T (1GE) - 2 - 57 2025-04-05 12:01:10.518643 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1 2025-04-05 12:01:10.519013 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2 2025-04-05 12:01:10.519363 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1 2025-04-05 12:01:10.519672 | orchestrator | Device Type Created: Other - Baremetal-Device - 3 2025-04-05 12:01:10.520067 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 3 - 58 2025-04-05 12:01:10.520430 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 3 - 59 2025-04-05 12:01:10.520731 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3 2025-04-05 12:01:10.521007 | orchestrator | Device Type Created: Other - Manager - 4 2025-04-05 12:01:10.521422 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 4 - 60 2025-04-05 12:01:10.521731 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 61 2025-04-05 12:01:10.522097 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 62 2025-04-05 12:01:10.523404 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 4 - 63 2025-04-05 12:01:10.523644 | orchestrator | Power Port Template Created: PS1 - C14 - 4 - 4 2025-04-05 12:01:10.523903 | orchestrator | Device Type Created: Other - Node - 5 2025-04-05 12:01:10.526092 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 64 2025-04-05 12:01:10.526537 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 65 2025-04-05 12:01:10.526831 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 66 2025-04-05 12:01:10.527616 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 5 - 67 2025-04-05 12:01:10.527802 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 5 2025-04-05 12:01:10.528182 | orchestrator | Device Type Created: Other - Baremetal-Housing - 6 2025-04-05 12:01:10.528677 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 68 2025-04-05 12:01:10.529080 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 69 2025-04-05 12:01:10.529515 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 70 
2025-04-05 12:01:10.529980 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 6 - 71 2025-04-05 12:01:10.530275 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 6 2025-04-05 12:01:10.531216 | orchestrator | Manufacturer queued for addition: .gitkeep 2025-04-05 12:01:10.531679 | orchestrator | Manufacturer Created: .gitkeep - 4 2025-04-05 12:01:10.532291 | orchestrator | 2025-04-05 12:01:10.532613 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] ******************* 2025-04-05 12:01:10.533044 | orchestrator | 2025-04-05 12:01:10.533541 | orchestrator | TASK [Manage NetBox resource Discworld of type site] *************************** 2025-04-05 12:01:11.677881 | orchestrator | changed: [localhost] 2025-04-05 12:01:11.678274 | orchestrator | 2025-04-05 12:01:11.679043 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ******************** 2025-04-05 12:01:12.841346 | orchestrator | changed: [localhost] 2025-04-05 12:01:12.844891 | orchestrator | 2025-04-05 12:01:12.845479 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-04-05 12:01:13.935571 | orchestrator | changed: [localhost] 2025-04-05 12:01:13.940506 | orchestrator | 2025-04-05 12:01:14.827246 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-04-05 12:01:14.827367 | orchestrator | changed: [localhost] 2025-04-05 12:01:14.832420 | orchestrator | 2025-04-05 12:01:14.833259 | orchestrator | TASK [Manage NetBox resource of type prefix] *********************************** 2025-04-05 12:01:15.721210 | orchestrator | changed: [localhost] 2025-04-05 12:01:15.721582 | orchestrator | 2025-04-05 12:01:15.722477 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:01:16.763601 | orchestrator | changed: [localhost] 2025-04-05 12:01:16.766004 | orchestrator | 2025-04-05 12:01:16.766390 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:01:17.740365 | orchestrator | changed: [localhost] 2025-04-05 12:01:17.741501 | orchestrator | 2025-04-05 12:01:17.742249 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:01:17.742506 | orchestrator | 2025-04-05 12:01:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:01:17.742539 | orchestrator | 2025-04-05 12:01:17 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:01:17.742571 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:01:17.965951 | orchestrator | 2025-04-05 12:01:17 | INFO  | Handle file /netbox/resources/200-rack-1000.yml 2025-04-05 12:01:19.009642 | orchestrator | 2025-04-05 12:01:19.010116 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ******************** 2025-04-05 12:01:19.057692 | orchestrator | 2025-04-05 12:01:19.058598 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ******************************** 2025-04-05 12:01:20.435033 | orchestrator | changed: [localhost] 2025-04-05 12:01:20.439201 | orchestrator | 2025-04-05 12:01:26.318558 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ****************** 2025-04-05 12:01:26.318805 | orchestrator | changed: [localhost] 2025-04-05 12:01:31.496243 | orchestrator | 2025-04-05 12:01:31.496828 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ****************** 2025-04-05 12:01:31.496878 | orchestrator | changed: [localhost] 2025-04-05 12:01:31.502696 | orchestrator | 2025-04-05 12:01:31.503945 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ****************** 2025-04-05 12:01:37.150281 | orchestrator | changed: [localhost] 2025-04-05 12:01:37.153518 | orchestrator | 2025-04-05 12:01:37.153987 | orchestrator | TASK [Manage NetBox resource testbed-switch-oob of type device] **************** 2025-04-05 12:01:42.126300 | orchestrator | changed: [localhost] 2025-04-05 12:01:42.131542 | orchestrator | 2025-04-05 12:01:42.133356 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] ******************* 2025-04-05 12:01:44.205805 | orchestrator | changed: [localhost] 2025-04-05 12:01:44.210871 | orchestrator | 2025-04-05 12:01:44.211272 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ******************** 2025-04-05 12:01:46.720841 | orchestrator | changed: [localhost] 2025-04-05 12:01:48.973252 | orchestrator | 2025-04-05 12:01:48.973379 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ******************** 2025-04-05 12:01:48.973419 | orchestrator | changed: [localhost] 2025-04-05 12:01:48.974513 | orchestrator | 2025-04-05 12:01:48.974634 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ******************** 2025-04-05 12:01:51.148314 | orchestrator | changed: [localhost] 2025-04-05 12:01:51.153396 | orchestrator | 2025-04-05 12:01:51.154336 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ******************** 2025-04-05 12:01:53.127507 | orchestrator | changed: [localhost] 2025-04-05 12:01:53.128998 | orchestrator | 2025-04-05 12:01:53.130102 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ******************** 2025-04-05 12:01:55.155658 | orchestrator | changed: [localhost] 2025-04-05 12:01:55.156771 | orchestrator | 2025-04-05 12:01:55.157036 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ******************** 2025-04-05 12:01:57.114892 | orchestrator | changed: [localhost] 2025-04-05 12:01:57.121333 | orchestrator | 2025-04-05 12:01:59.081054 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ******************** 2025-04-05 12:01:59.081132 | orchestrator | changed: [localhost] 2025-04-05 12:01:59.081567 | orchestrator | 2025-04-05 12:01:59.082397 | orchestrator | TASK 
[Manage NetBox resource testbed-node-7 of type device] ******************** 2025-04-05 12:02:01.469377 | orchestrator | changed: [localhost] 2025-04-05 12:02:01.471285 | orchestrator | 2025-04-05 12:02:01.473039 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ******************** 2025-04-05 12:02:04.007994 | orchestrator | changed: [localhost] 2025-04-05 12:02:04.010293 | orchestrator | 2025-04-05 12:02:04.011389 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ******************** 2025-04-05 12:02:05.989405 | orchestrator | changed: [localhost] 2025-04-05 12:02:05.990359 | orchestrator | 2025-04-05 12:02:05.990403 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:05.990597 | orchestrator | 2025-04-05 12:02:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:05.990623 | orchestrator | 2025-04-05 12:02:05 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:05.990644 | orchestrator | localhost : ok=16 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:06.225517 | orchestrator | 2025-04-05 12:02:06 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml 2025-04-05 12:02:06.229835 | orchestrator | 2025-04-05 12:02:06 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml 2025-04-05 12:02:06.230676 | orchestrator | 2025-04-05 12:02:06 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml 2025-04-05 12:02:06.235046 | orchestrator | 2025-04-05 12:02:06 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml 2025-04-05 12:02:07.349142 | orchestrator | 2025-04-05 12:02:07.358794 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] ************* 2025-04-05 12:02:07.358838 | orchestrator | 2025-04-05 12:02:07.361567 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] *************** 2025-04-05 12:02:07.361599 | orchestrator | 2025-04-05 12:02:07.362547 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] *************** 2025-04-05 12:02:07.372672 | orchestrator | 2025-04-05 12:02:07.373301 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] *************** 2025-04-05 12:02:07.402125 | orchestrator | 2025-04-05 12:02:07.409869 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:07.409929 | orchestrator | 2025-04-05 12:02:07.410089 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:07.417121 | orchestrator | 2025-04-05 12:02:07.417463 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:07.426391 | orchestrator | 2025-04-05 12:02:07.428348 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:10.109751 | orchestrator | changed: [localhost] 2025-04-05 12:02:10.112020 | orchestrator | 2025-04-05 12:02:10.113191 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:10.307573 | orchestrator | changed: [localhost] 2025-04-05 12:02:10.316014 | orchestrator | 2025-04-05 12:02:10.317608 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:10.566371 | orchestrator | 
changed: [localhost] 2025-04-05 12:02:10.571095 | orchestrator | 2025-04-05 12:02:10.571866 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:10.747714 | orchestrator | changed: [localhost] 2025-04-05 12:02:10.751080 | orchestrator | 2025-04-05 12:02:10.751555 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:12.480338 | orchestrator | changed: [localhost] 2025-04-05 12:02:12.481820 | orchestrator | 2025-04-05 12:02:12.793657 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:12.793778 | orchestrator | changed: [localhost] 2025-04-05 12:02:12.797866 | orchestrator | 2025-04-05 12:02:12.798098 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:12.899338 | orchestrator | changed: [localhost] 2025-04-05 12:02:12.907763 | orchestrator | 2025-04-05 12:02:12.908114 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:14.333278 | orchestrator | changed: [localhost] 2025-04-05 12:02:14.335883 | orchestrator | 2025-04-05 12:02:14.530205 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:14.530297 | orchestrator | changed: [localhost] 2025-04-05 12:02:14.533727 | orchestrator | 2025-04-05 12:02:14.533857 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:16.031960 | orchestrator | changed: [localhost] 2025-04-05 12:02:16.153162 | orchestrator | 2025-04-05 12:02:16.153260 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:16.153290 | orchestrator | changed: [localhost] 2025-04-05 12:02:16.155226 | orchestrator | 2025-04-05 12:02:16.155353 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:17.358678 | orchestrator | changed: [localhost] 2025-04-05 12:02:17.361837 | orchestrator | 2025-04-05 12:02:17.362186 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:17.362523 | orchestrator | 2025-04-05 12:02:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:17.362554 | orchestrator | 2025-04-05 12:02:17 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:17.362576 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:17.435845 | orchestrator | changed: [localhost] 2025-04-05 12:02:17.438395 | orchestrator | 2025-04-05 12:02:17.498531 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:17.498587 | orchestrator | changed: [localhost] 2025-04-05 12:02:17.505052 | orchestrator | 2025-04-05 12:02:17.552357 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:17.552391 | orchestrator | 2025-04-05 12:02:17 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml 2025-04-05 12:02:18.399042 | orchestrator | 2025-04-05 12:02:18.400115 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] *************** 2025-04-05 12:02:18.445345 | orchestrator | 2025-04-05 12:02:18.446438 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:18.637193 | orchestrator | changed: [localhost] 2025-04-05 12:02:18.639763 | orchestrator | 2025-04-05 12:02:18.640456 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ******************** 2025-04-05 12:02:18.680521 | orchestrator | changed: [localhost] 2025-04-05 12:02:18.685683 | orchestrator | 2025-04-05 12:02:19.630661 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ******************** 2025-04-05 12:02:19.630795 | orchestrator | changed: [localhost] 2025-04-05 12:02:20.048517 | orchestrator | 2025-04-05 12:02:20.048619 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:20.048649 | orchestrator | changed: [localhost] 2025-04-05 12:02:20.049744 | orchestrator | 2025-04-05 12:02:20.049839 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:20.050127 | orchestrator | 2025-04-05 12:02:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:20.051450 | orchestrator | 2025-04-05 12:02:20 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:20.051479 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:20.173977 | orchestrator | changed: [localhost] 2025-04-05 12:02:20.177535 | orchestrator | 2025-04-05 12:02:20.177683 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:20.178064 | orchestrator | 2025-04-05 12:02:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:20.178098 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:20.231606 | orchestrator | 2025-04-05 12:02:20 | INFO  | Please wait and do not abort execution. 
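The 300-testbed-*.yml resource files handled above drive the per-device objects in NetBox: each play wires up cables between device interfaces and registers IP addresses. As a rough illustration only (the testbed's actual playbooks may be structured differently), tasks of the kind reported as "Manage NetBox resource of type cable/ip_address" could be expressed with the netbox.netbox collection modules as sketched below; netbox_url, netbox_token, both cable terminations and the address are placeholder values.

```yaml
# Hypothetical sketch, not the testbed's actual task definitions.
- name: Manage NetBox resource of type cable
  netbox.netbox.netbox_cable:
    netbox_url: "{{ netbox_url }}"        # placeholder
    netbox_token: "{{ netbox_token }}"    # placeholder
    data:
      termination_a_type: dcim.interface
      termination_a:
        device: testbed-node-0            # illustrative termination
        name: Ethernet0
      termination_b_type: dcim.interface
      termination_b:
        device: testbed-switch-0          # illustrative termination
        name: Ethernet1
    state: present

- name: Manage NetBox resource of type ip_address
  netbox.netbox.netbox_ip_address:
    netbox_url: "{{ netbox_url }}"
    netbox_token: "{{ netbox_token }}"
    data:
      address: 192.168.16.10/20           # illustrative address, not taken from the resource files
    state: present
```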
2025-04-05 12:02:20.231639 | orchestrator | 2025-04-05 12:02:20 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml 2025-04-05 12:02:20.299802 | orchestrator | changed: [localhost] 2025-04-05 12:02:20.303845 | orchestrator | 2025-04-05 12:02:20.304479 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:20.357256 | orchestrator | 2025-04-05 12:02:20 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml 2025-04-05 12:02:21.099818 | orchestrator | changed: [localhost] 2025-04-05 12:02:21.100048 | orchestrator | 2025-04-05 12:02:21.100079 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:21.176103 | orchestrator | 2025-04-05 12:02:21.176388 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] ************* 2025-04-05 12:02:21.229788 | orchestrator | 2025-04-05 12:02:21.230012 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:21.292936 | orchestrator | 2025-04-05 12:02:21.345650 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] *************** 2025-04-05 12:02:21.345744 | orchestrator | 2025-04-05 12:02:21.347092 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:21.865547 | orchestrator | changed: [localhost] 2025-04-05 12:02:21.868012 | orchestrator | 2025-04-05 12:02:21.870226 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:22.339958 | orchestrator | changed: [localhost] 2025-04-05 12:02:22.340313 | orchestrator | 2025-04-05 12:02:22.340422 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:23.013877 | orchestrator | changed: [localhost] 2025-04-05 12:02:23.015327 | orchestrator | 2025-04-05 12:02:23.015409 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:23.235075 | orchestrator | changed: [localhost] 2025-04-05 12:02:23.238572 | orchestrator | 2025-04-05 12:02:23.238844 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:23.551450 | orchestrator | changed: [localhost] 2025-04-05 12:02:23.554189 | orchestrator | 2025-04-05 12:02:23.554571 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:23.703644 | orchestrator | changed: [localhost] 2025-04-05 12:02:23.708369 | orchestrator | 2025-04-05 12:02:23.710072 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ******************** 2025-04-05 12:02:24.575777 | orchestrator | changed: [localhost] 2025-04-05 12:02:24.576381 | orchestrator | 2025-04-05 12:02:24.576616 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:24.611486 | orchestrator | changed: [localhost] 2025-04-05 12:02:24.620840 | orchestrator | 2025-04-05 12:02:25.141212 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:25.141330 | orchestrator | changed: [localhost] 2025-04-05 12:02:25.144145 | orchestrator | 2025-04-05 12:02:25.144378 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:25.150521 | orchestrator | 
changed: [localhost] 2025-04-05 12:02:25.153746 | orchestrator | 2025-04-05 12:02:25.365049 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:25.365123 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:25.365137 | orchestrator | 2025-04-05 12:02:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:25.365152 | orchestrator | 2025-04-05 12:02:25 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:25.365176 | orchestrator | 2025-04-05 12:02:25 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml 2025-04-05 12:02:26.236243 | orchestrator | 2025-04-05 12:02:26.274850 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] *************** 2025-04-05 12:02:26.274945 | orchestrator | 2025-04-05 12:02:26.275370 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:26.329547 | orchestrator | changed: [localhost] 2025-04-05 12:02:26.555759 | orchestrator | 2025-04-05 12:02:26.555816 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:26.555841 | orchestrator | changed: [localhost] 2025-04-05 12:02:26.558842 | orchestrator | 2025-04-05 12:02:26.558988 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:26.620530 | orchestrator | changed: [localhost] 2025-04-05 12:02:26.624658 | orchestrator | 2025-04-05 12:02:26.625214 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:26.625467 | orchestrator | 2025-04-05 12:02:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:26.625494 | orchestrator | 2025-04-05 12:02:26 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:26.625514 | orchestrator | localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:26.812779 | orchestrator | 2025-04-05 12:02:26 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml 2025-04-05 12:02:27.728160 | orchestrator | changed: [localhost] 2025-04-05 12:02:27.728335 | orchestrator | 2025-04-05 12:02:27.728361 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ******************** 2025-04-05 12:02:27.792923 | orchestrator | 2025-04-05 12:02:27.839435 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] *************** 2025-04-05 12:02:27.839486 | orchestrator | 2025-04-05 12:02:27.842988 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:27.910171 | orchestrator | changed: [localhost] 2025-04-05 12:02:27.912872 | orchestrator | 2025-04-05 12:02:27.913128 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:28.130219 | orchestrator | changed: [localhost] 2025-04-05 12:02:28.135575 | orchestrator | 2025-04-05 12:02:28.135659 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:29.128264 | orchestrator | changed: [localhost] 2025-04-05 12:02:29.130530 | orchestrator | 2025-04-05 12:02:29.169703 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:29.169849 | orchestrator | changed: [localhost] 2025-04-05 12:02:29.169978 | orchestrator | 2025-04-05 12:02:29.170173 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:29.170398 | orchestrator | 2025-04-05 12:02:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:29.170732 | orchestrator | 2025-04-05 12:02:29 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:29.171063 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:29.371098 | orchestrator | 2025-04-05 12:02:29 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml 2025-04-05 12:02:29.641200 | orchestrator | changed: [localhost] 2025-04-05 12:02:29.648105 | orchestrator | 2025-04-05 12:02:29.767271 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:29.767365 | orchestrator | changed: [localhost] 2025-04-05 12:02:30.101395 | orchestrator | 2025-04-05 12:02:30.101497 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:30.101529 | orchestrator | changed: [localhost] 2025-04-05 12:02:30.236042 | orchestrator | 2025-04-05 12:02:30.236106 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ******************** 2025-04-05 12:02:30.236133 | orchestrator | 2025-04-05 12:02:30.274201 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] ************** 2025-04-05 12:02:30.274250 | orchestrator | 2025-04-05 12:02:31.221289 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:31.222077 | orchestrator | changed: [localhost] 2025-04-05 12:02:31.224698 | orchestrator | 2025-04-05 12:02:31.225415 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:31.386762 | orchestrator | changed: [localhost] 2025-04-05 12:02:31.387151 | orchestrator | 2025-04-05 12:02:31.387391 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:31.773856 | orchestrator | changed: [localhost] 2025-04-05 12:02:31.774233 | orchestrator | 2025-04-05 12:02:31.774436 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:31.774502 | orchestrator | 2025-04-05 12:02:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:31.774579 | orchestrator | 2025-04-05 12:02:31 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:31.775984 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:31.958780 | orchestrator | 2025-04-05 12:02:31 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml 2025-04-05 12:02:32.217874 | orchestrator | changed: [localhost] 2025-04-05 12:02:32.222447 | orchestrator | 2025-04-05 12:02:32.222776 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:32.845750 | orchestrator | changed: [localhost] 2025-04-05 12:02:32.849469 | orchestrator | 2025-04-05 12:02:32.975092 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:32.975150 | orchestrator | changed: [localhost] 2025-04-05 12:02:32.980102 | orchestrator | 2025-04-05 12:02:32.980185 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:33.028222 | orchestrator | 2025-04-05 12:02:33.028610 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] *************** 2025-04-05 12:02:33.064227 | orchestrator | 2025-04-05 12:02:33.064470 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:34.363274 | orchestrator | changed: [localhost] 2025-04-05 12:02:34.364098 | orchestrator | 2025-04-05 12:02:34.366608 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:34.635051 | orchestrator | changed: [localhost] 2025-04-05 12:02:34.636031 | orchestrator | 2025-04-05 12:02:34.636343 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:34.716254 | orchestrator | changed: [localhost] 2025-04-05 12:02:34.719263 | orchestrator | 2025-04-05 12:02:34.809745 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:34.809836 | orchestrator | changed: [localhost] 2025-04-05 12:02:34.811495 | orchestrator | 2025-04-05 12:02:34.812265 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:35.836549 | orchestrator | changed: [localhost] 2025-04-05 12:02:35.836709 | orchestrator | 2025-04-05 12:02:35.837489 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ******************** 2025-04-05 12:02:35.901920 | orchestrator | changed: [localhost] 2025-04-05 12:02:35.904812 | orchestrator | 2025-04-05 12:02:35.904952 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:35.995646 | orchestrator | changed: [localhost] 2025-04-05 12:02:36.002144 | orchestrator | 2025-04-05 12:02:36.318375 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:36.319160 | orchestrator | changed: [localhost] 2025-04-05 12:02:36.319369 | orchestrator | 2025-04-05 12:02:36.319446 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:36.865815 | orchestrator | changed: [localhost] 2025-04-05 12:02:36.868354 | orchestrator | 2025-04-05 12:02:37.445976 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ******************** 2025-04-05 12:02:37.446156 | orchestrator | changed: [localhost] 2025-04-05 12:02:37.446318 | orchestrator | 2025-04-05 12:02:37.446348 | orchestrator | PLAY 
RECAP ********************************************************************* 2025-04-05 12:02:37.446723 | orchestrator | 2025-04-05 12:02:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:37.446749 | orchestrator | 2025-04-05 12:02:37 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:37.446770 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:37.474946 | orchestrator | changed: [localhost] 2025-04-05 12:02:37.475798 | orchestrator | 2025-04-05 12:02:37.641598 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:37.641634 | orchestrator | 2025-04-05 12:02:37 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml 2025-04-05 12:02:38.117784 | orchestrator | changed: [localhost] 2025-04-05 12:02:38.118696 | orchestrator | 2025-04-05 12:02:38.188156 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:38.188223 | orchestrator | changed: [localhost] 2025-04-05 12:02:38.193648 | orchestrator | 2025-04-05 12:02:38.193987 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:38.194112 | orchestrator | 2025-04-05 12:02:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:38.194140 | orchestrator | 2025-04-05 12:02:38 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:38.194356 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:38.375865 | orchestrator | 2025-04-05 12:02:38 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml 2025-04-05 12:02:38.500122 | orchestrator | 2025-04-05 12:02:38.565025 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] *************** 2025-04-05 12:02:38.565063 | orchestrator | 2025-04-05 12:02:38.566811 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:38.673066 | orchestrator | changed: [localhost] 2025-04-05 12:02:38.679362 | orchestrator | 2025-04-05 12:02:38.679624 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:39.197589 | orchestrator | 2025-04-05 12:02:39.238084 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] *************** 2025-04-05 12:02:39.238155 | orchestrator | 2025-04-05 12:02:39.240402 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:39.528689 | orchestrator | changed: [localhost] 2025-04-05 12:02:39.529844 | orchestrator | 2025-04-05 12:02:39.530123 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:39.737415 | orchestrator | changed: [localhost] 2025-04-05 12:02:39.737728 | orchestrator | 2025-04-05 12:02:39.738370 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] ******************* 2025-04-05 12:02:40.340066 | orchestrator | changed: [localhost] 2025-04-05 12:02:40.341011 | orchestrator | 2025-04-05 12:02:40.341631 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:40.921750 | orchestrator | changed: [localhost] 2025-04-05 12:02:40.925087 | orchestrator | 
2025-04-05 12:02:40.987780 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:40.987818 | orchestrator | changed: [localhost] 2025-04-05 12:02:40.988487 | orchestrator | 2025-04-05 12:02:40.988734 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:40.988972 | orchestrator | 2025-04-05 12:02:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:40.989452 | orchestrator | 2025-04-05 12:02:40 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:40.989481 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:41.064121 | orchestrator | changed: [localhost] 2025-04-05 12:02:41.067202 | orchestrator | 2025-04-05 12:02:41.068707 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:41.184974 | orchestrator | 2025-04-05 12:02:41 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml 2025-04-05 12:02:41.721323 | orchestrator | changed: [localhost] 2025-04-05 12:02:41.724040 | orchestrator | 2025-04-05 12:02:41.724616 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:41.983698 | orchestrator | 2025-04-05 12:02:42.015731 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] ************* 2025-04-05 12:02:42.015795 | orchestrator | 2025-04-05 12:02:42.016648 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:42.090498 | orchestrator | changed: [localhost] 2025-04-05 12:02:42.094781 | orchestrator | 2025-04-05 12:02:42.095064 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ******************** 2025-04-05 12:02:42.545659 | orchestrator | changed: [localhost] 2025-04-05 12:02:42.552664 | orchestrator | 2025-04-05 12:02:42.554070 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:43.401310 | orchestrator | changed: [localhost] 2025-04-05 12:02:43.402277 | orchestrator | 2025-04-05 12:02:43.402401 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:43.402542 | orchestrator | 2025-04-05 12:02:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:43.402833 | orchestrator | 2025-04-05 12:02:43 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:43.402864 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:43.445958 | orchestrator | changed: [localhost] 2025-04-05 12:02:43.449269 | orchestrator | 2025-04-05 12:02:43.449482 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:43.910590 | orchestrator | changed: [localhost] 2025-04-05 12:02:43.912539 | orchestrator | 2025-04-05 12:02:43.913138 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:44.054843 | orchestrator | changed: [localhost] 2025-04-05 12:02:44.056981 | orchestrator | 2025-04-05 12:02:44.057578 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-05 12:02:45.021622 | orchestrator | changed: [localhost] 2025-04-05 12:02:45.022173 | orchestrator | 2025-04-05 12:02:45.022218 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:45.082416 | orchestrator | changed: [localhost] 2025-04-05 12:02:45.082766 | orchestrator | 2025-04-05 12:02:45.082798 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:45.083098 | orchestrator | 2025-04-05 12:02:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:45.083561 | orchestrator | 2025-04-05 12:02:45 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:45.483269 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:45.483404 | orchestrator | changed: [localhost] 2025-04-05 12:02:45.486573 | orchestrator | 2025-04-05 12:02:45.486794 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:46.362258 | orchestrator | changed: [localhost] 2025-04-05 12:02:46.367098 | orchestrator | 2025-04-05 12:02:46.367659 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:46.812399 | orchestrator | changed: [localhost] 2025-04-05 12:02:46.824566 | orchestrator | 2025-04-05 12:02:46.826293 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-05 12:02:47.811929 | orchestrator | changed: [localhost] 2025-04-05 12:02:47.812680 | orchestrator | 2025-04-05 12:02:47.813322 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ******************** 2025-04-05 12:02:48.008815 | orchestrator | changed: [localhost] 2025-04-05 12:02:48.009937 | orchestrator | 2025-04-05 12:02:49.392480 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ******************** 2025-04-05 12:02:49.392612 | orchestrator | changed: [localhost] 2025-04-05 12:02:49.392977 | orchestrator | 2025-04-05 12:02:49.393153 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:49.393448 | orchestrator | 2025-04-05 12:02:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:49.394546 | orchestrator | 2025-04-05 12:02:49 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:02:49.394802 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:49.443363 | orchestrator | changed: [localhost] 2025-04-05 12:02:49.444230 | orchestrator | 2025-04-05 12:02:49.445011 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:02:49.445297 | orchestrator | 2025-04-05 12:02:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:02:49.445582 | orchestrator | 2025-04-05 12:02:49 | INFO  | Please wait and do not abort execution. 2025-04-05 12:02:49.445621 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:02:49.622988 | orchestrator | 2025-04-05 12:02:49 | INFO  | Runtime: 112.0413s 2025-04-05 12:02:49.863857 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-04-05 12:02:50.050587 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-05 12:02:50.055190 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055226 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055241 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-04-05 12:02:50.055256 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 3 minutes ago Up 3 minutes (healthy) 8000/tcp 2025-04-05 12:02:50.055281 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055294 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055334 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055348 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2025-04-05 12:02:50.055360 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055372 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 3 minutes ago Up 3 minutes (healthy) 3306/tcp 2025-04-05 12:02:50.055385 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055398 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055414 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 3 minutes ago Up 3 minutes (healthy) 6379/tcp 2025-04-05 12:02:50.055427 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055440 | 
orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055452 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055465 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2025-04-05 12:02:50.055485 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-04-05 12:02:50.186491 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-05 12:02:50.193781 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-04-05 12:02:50.193826 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 5 minutes (healthy) 2025-04-05 12:02:50.193842 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.8-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-04-05 12:02:50.193858 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-04-05 12:02:50.193881 | orchestrator | ++ semver latest 7.0.0 2025-04-05 12:02:50.241156 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-05 12:02:50.244515 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-05 12:02:50.244554 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-04-05 12:02:50.244578 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-04-05 12:02:51.765792 | orchestrator | 2025-04-05 12:02:51 | INFO  | Task 786795bd-e8f3-40ca-aeca-c9cc044977c3 (resolvconf) was prepared for execution. 2025-04-05 12:02:55.287299 | orchestrator | 2025-04-05 12:02:51 | INFO  | It takes a moment until task 786795bd-e8f3-40ca-aeca-c9cc044977c3 (resolvconf) has been started and output is visible here. 
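From this point on the job drives the configuration through `osism apply <playbook>` rather than shell scripts; the `PLAY [Apply role resolvconf]` that follows is effectively a one-role playbook run, limited to testbed-manager via `-l`. A minimal sketch of such a wrapper playbook, assuming the role name shown in the task output (the playbook actually shipped with osism-ansible may differ):

```yaml
# Hypothetical sketch of the wrapper behind "osism apply resolvconf -l testbed-manager".
# Host pattern and privilege escalation are assumptions.
- name: Apply role resolvconf
  hosts: testbed-manager
  become: true
  roles:
    - osism.commons.resolvconf
```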
2025-04-05 12:02:55.287461 | orchestrator | 2025-04-05 12:02:55.287966 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-05 12:02:55.289784 | orchestrator | 2025-04-05 12:02:55.290913 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 12:02:55.291648 | orchestrator | Saturday 05 April 2025 12:02:55 +0000 (0:00:00.108) 0:00:00.108 ******** 2025-04-05 12:02:59.821830 | orchestrator | ok: [testbed-manager] 2025-04-05 12:02:59.822107 | orchestrator | 2025-04-05 12:02:59.822386 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-05 12:02:59.823002 | orchestrator | Saturday 05 April 2025 12:02:59 +0000 (0:00:04.537) 0:00:04.646 ******** 2025-04-05 12:02:59.872418 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:02:59.873119 | orchestrator | 2025-04-05 12:02:59.873987 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-05 12:02:59.874752 | orchestrator | Saturday 05 April 2025 12:02:59 +0000 (0:00:00.052) 0:00:04.698 ******** 2025-04-05 12:02:59.958146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-05 12:02:59.959031 | orchestrator | 2025-04-05 12:02:59.959824 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-05 12:02:59.960695 | orchestrator | Saturday 05 April 2025 12:02:59 +0000 (0:00:00.085) 0:00:04.784 ******** 2025-04-05 12:03:00.020272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 12:03:00.021203 | orchestrator | 2025-04-05 12:03:00.021961 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-05 12:03:00.022455 | orchestrator | Saturday 05 April 2025 12:03:00 +0000 (0:00:00.061) 0:00:04.846 ******** 2025-04-05 12:03:00.922929 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:00.923087 | orchestrator | 2025-04-05 12:03:00.923251 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-05 12:03:00.923282 | orchestrator | Saturday 05 April 2025 12:03:00 +0000 (0:00:00.900) 0:00:05.747 ******** 2025-04-05 12:03:00.988709 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:00.989773 | orchestrator | 2025-04-05 12:03:00.990498 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-05 12:03:00.991159 | orchestrator | Saturday 05 April 2025 12:03:00 +0000 (0:00:00.067) 0:00:05.814 ******** 2025-04-05 12:03:01.394398 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:01.394948 | orchestrator | 2025-04-05 12:03:01.395846 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-05 12:03:01.396465 | orchestrator | Saturday 05 April 2025 12:03:01 +0000 (0:00:00.404) 0:00:06.219 ******** 2025-04-05 12:03:01.465559 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:01.466091 | orchestrator | 2025-04-05 12:03:01.466143 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-05 12:03:01.466444 | orchestrator | Saturday 05 April 2025 12:03:01 +0000 (0:00:00.070) 0:00:06.290 
******** 2025-04-05 12:03:01.934289 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:01.934466 | orchestrator | 2025-04-05 12:03:01.935133 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-05 12:03:01.935508 | orchestrator | Saturday 05 April 2025 12:03:01 +0000 (0:00:00.467) 0:00:06.757 ******** 2025-04-05 12:03:02.986574 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:02.989537 | orchestrator | 2025-04-05 12:03:03.915772 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-05 12:03:03.915993 | orchestrator | Saturday 05 April 2025 12:03:02 +0000 (0:00:01.053) 0:00:07.811 ******** 2025-04-05 12:03:03.916028 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:03.916112 | orchestrator | 2025-04-05 12:03:03.917781 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-05 12:03:03.917838 | orchestrator | Saturday 05 April 2025 12:03:03 +0000 (0:00:00.928) 0:00:08.740 ******** 2025-04-05 12:03:03.990771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-05 12:03:03.991515 | orchestrator | 2025-04-05 12:03:03.991546 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-05 12:03:03.991990 | orchestrator | Saturday 05 April 2025 12:03:03 +0000 (0:00:00.076) 0:00:08.816 ******** 2025-04-05 12:03:05.100235 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:05.100964 | orchestrator | 2025-04-05 12:03:05.101007 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:03:05.101987 | orchestrator | 2025-04-05 12:03:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:03:05.103658 | orchestrator | 2025-04-05 12:03:05 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:03:05.103699 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:03:05.103876 | orchestrator | 2025-04-05 12:03:05.105320 | orchestrator | 2025-04-05 12:03:05.106103 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:03:05.110177 | orchestrator | Saturday 05 April 2025 12:03:05 +0000 (0:00:01.106) 0:00:09.923 ******** 2025-04-05 12:03:05.110388 | orchestrator | =============================================================================== 2025-04-05 12:03:05.110703 | orchestrator | Gathering Facts --------------------------------------------------------- 4.54s 2025-04-05 12:03:05.110810 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2025-04-05 12:03:05.111330 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-04-05 12:03:05.111725 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-04-05 12:03:05.112086 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.90s 2025-04-05 12:03:05.112461 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.47s 2025-04-05 12:03:05.112860 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.40s 2025-04-05 12:03:05.113261 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-04-05 12:03:05.113968 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-04-05 12:03:05.115009 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-04-05 12:03:05.115941 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-04-05 12:03:05.116872 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-04-05 12:03:05.117514 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-04-05 12:03:05.500240 | orchestrator | + osism apply sshconfig 2025-04-05 12:03:07.054092 | orchestrator | 2025-04-05 12:03:07 | INFO  | Task c25da7e8-4beb-4d6b-bb93-7905371e1d9a (sshconfig) was prepared for execution. 2025-04-05 12:03:10.553269 | orchestrator | 2025-04-05 12:03:07 | INFO  | It takes a moment until task c25da7e8-4beb-4d6b-bb93-7905371e1d9a (sshconfig) has been started and output is visible here. 
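The resolvconf run above removes packages that manage /etc/resolv.conf, links the systemd-resolved stub file into place and restarts systemd-resolved. Reduced to standalone tasks, its net effect looks roughly like the sketch below; the real osism.commons.resolvconf role adds validation, archiving of the existing file and distribution-specific handling.

```yaml
# Hypothetical sketch of the essential resolvconf steps seen in the log above.
- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    path: /etc/resolv.conf
    state: link
    force: true

- name: Start/enable and restart systemd-resolved
  ansible.builtin.systemd:
    name: systemd-resolved
    state: restarted
    enabled: true
```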
2025-04-05 12:03:10.554126 | orchestrator | 2025-04-05 12:03:10.554995 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-04-05 12:03:10.555035 | orchestrator | 2025-04-05 12:03:10.556788 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-04-05 12:03:10.557805 | orchestrator | Saturday 05 April 2025 12:03:10 +0000 (0:00:00.119) 0:00:00.119 ******** 2025-04-05 12:03:11.066447 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:11.067025 | orchestrator | 2025-04-05 12:03:11.067065 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-04-05 12:03:11.067724 | orchestrator | Saturday 05 April 2025 12:03:11 +0000 (0:00:00.515) 0:00:00.635 ******** 2025-04-05 12:03:11.493159 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:11.493285 | orchestrator | 2025-04-05 12:03:11.494810 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-04-05 12:03:11.495084 | orchestrator | Saturday 05 April 2025 12:03:11 +0000 (0:00:00.427) 0:00:01.063 ******** 2025-04-05 12:03:16.619546 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-04-05 12:03:16.620689 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-04-05 12:03:16.623415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-04-05 12:03:16.623868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-04-05 12:03:16.625575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-05 12:03:16.625869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-04-05 12:03:16.626210 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-04-05 12:03:16.626680 | orchestrator | 2025-04-05 12:03:16.626995 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-04-05 12:03:16.627393 | orchestrator | Saturday 05 April 2025 12:03:16 +0000 (0:00:05.124) 0:00:06.188 ******** 2025-04-05 12:03:16.693435 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:16.694434 | orchestrator | 2025-04-05 12:03:16.694463 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-04-05 12:03:16.695137 | orchestrator | Saturday 05 April 2025 12:03:16 +0000 (0:00:00.074) 0:00:06.262 ******** 2025-04-05 12:03:17.239011 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:17.239435 | orchestrator | 2025-04-05 12:03:17.239462 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:03:17.239484 | orchestrator | 2025-04-05 12:03:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:03:17.239777 | orchestrator | 2025-04-05 12:03:17 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:03:17.240840 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:03:17.241806 | orchestrator | 2025-04-05 12:03:17.242727 | orchestrator | 2025-04-05 12:03:17.243470 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:03:17.244197 | orchestrator | Saturday 05 April 2025 12:03:17 +0000 (0:00:00.544) 0:00:06.807 ******** 2025-04-05 12:03:17.244846 | orchestrator | =============================================================================== 2025-04-05 12:03:17.245622 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.12s 2025-04-05 12:03:17.245962 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2025-04-05 12:03:17.246724 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.52s 2025-04-05 12:03:17.247504 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.43s 2025-04-05 12:03:17.247904 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-04-05 12:03:17.619177 | orchestrator | + osism apply known-hosts 2025-04-05 12:03:19.190380 | orchestrator | 2025-04-05 12:03:19 | INFO  | Task 11f194b7-1929-4168-827c-4853c82fd36b (known-hosts) was prepared for execution. 2025-04-05 12:03:22.897749 | orchestrator | 2025-04-05 12:03:19 | INFO  | It takes a moment until task 11f194b7-1929-4168-827c-4853c82fd36b (known-hosts) has been started and output is visible here. 2025-04-05 12:03:22.897928 | orchestrator | 2025-04-05 12:03:22.898835 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-04-05 12:03:22.899704 | orchestrator | 2025-04-05 12:03:22.900846 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-04-05 12:03:22.901741 | orchestrator | Saturday 05 April 2025 12:03:22 +0000 (0:00:00.159) 0:00:00.159 ******** 2025-04-05 12:03:28.226967 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-05 12:03:28.227347 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-05 12:03:28.227370 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-05 12:03:28.227638 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-05 12:03:28.228088 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-05 12:03:28.228496 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-05 12:03:28.229058 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-05 12:03:28.229531 | orchestrator | 2025-04-05 12:03:28.230793 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-04-05 12:03:28.231913 | orchestrator | Saturday 05 April 2025 12:03:28 +0000 (0:00:05.330) 0:00:05.489 ******** 2025-04-05 12:03:28.391728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-05 12:03:28.392449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-05 12:03:28.393220 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-05 12:03:28.393703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-05 12:03:28.395455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-05 12:03:28.395649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-05 12:03:28.396162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-05 12:03:28.396863 | orchestrator | 2025-04-05 12:03:28.397373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:28.397713 | orchestrator | Saturday 05 April 2025 12:03:28 +0000 (0:00:00.168) 0:00:05.658 ******** 2025-04-05 12:03:29.522309 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvBJs3uYLXegz34ZUbQDkU5DN5W/W1uVzXl+X6MMyZX6ceEqmUFGnZ/8/NGfySXrhnJPLvZsvAHPeH2RAT2lOL4V490D/J8InImi6qvwWVjCmBromL02L5YmZH8awiyiZZ8y456EIzBMI4P34FK5jk05hpDbnCb4I/wvBhI27f51YuzX9B2X+2izZxXf2TVu30cqOeuL5uvkTNK9y8AuYUYOWo469NgVs7kL72xss4sEvaKk5Oafl7PW0h3sWCjAtGay2XYiihhysyD0tiYYPfYECtqeXar2nYbPPu9wroZwPAvI80lF1t2ENXQznKW0ZZNWIJrRWxEO4jht2dMA192TsNKapFVVs5PoMChRCEoSIMgro7/3xeAQJvq+exCgvBHrsFdc6WPSgVnDOXUzL01FZgEpi2oxUz39AQ/8Geqe7k2CMSVM9tlc8azwBdZsDXuhX3mc93A2skhGeWIgZSdKWPz05ug9AGx3791Gbbet71FUWzl9LtNuxBu0+Cobs=) 2025-04-05 12:03:29.523143 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCOMq5nMOd4dz6GiRYJZ4KOIvghH7unA3M/yKCxxP1OgescufFuEMthkSibIUNdjbvTln3UbJ7iR+fjn53Dg6is=) 2025-04-05 12:03:29.523174 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKFXhz4ntLvm+kFZlLDGfdrwCLAOEYmu8D96SKKWpSOC) 2025-04-05 12:03:29.523620 | orchestrator | 2025-04-05 12:03:29.524529 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:29.524779 | orchestrator | Saturday 05 April 2025 12:03:29 +0000 (0:00:01.128) 0:00:06.786 ******** 2025-04-05 12:03:30.545695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMAfDH/u7eg+fQnrEcJ/3HHgGFdQ/ErhOrqbj4hm+4LV9BAGj4kwkYJ4Mrfyj8PJ4fnkD7m74n1UrdOjJVjnogpYGLgqib8/902iMd7AfEcJttRS0mii0XDjFfr3jtUdKVUeQO6u/6dhEcWCRaIHfUspFY7mcftMqdnygLKtG7e8/HgnToqD10PqNHA70pHSBrYqb73XgxR0czwKAyTkJihq593NqNHFahKW00JnPErO48r6p8zKO1CcZv1g0EyYbK/VsPhdBGmgNpagCZ00MQDS829X7BZA0IucCPQbCHumlEogSBLas+f3m8az5OlLa7Wst0ejiXYLMmU9kOZC9laOGXx/QXhWOxFZhP1uQN9W607x7tztXoJeGyJNAEB6O6qZ7BDwDaY1SyZliIGfwh+O3l8pbEL5ebcu7pACCwyNj/2Q7E1BB4XhgG25tuiDN36vMJm0zoEHwoTP1V9wMxFk5HDleJkIWHe7AWOmSlDuGFrHKlCqQA7ohjUjkx4FE=) 2025-04-05 12:03:30.546169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOK1pmRQ/kixmNPDJ6owFE57XWnLK25ldFph1RM+D0gzZEOdSgijE2Azshb8b+sl5Kv9p6iaFWX+CckT+lh+Yuw=) 2025-04-05 12:03:30.547227 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQhn+kSAelBsE4KT8VEsSknHMFy70RBl+WZZr05ixhe) 2025-04-05 12:03:30.548219 | orchestrator | 2025-04-05 12:03:30.549016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:30.549628 | orchestrator | Saturday 05 April 2025 12:03:30 +0000 (0:00:01.023) 0:00:07.810 ******** 2025-04-05 12:03:31.543165 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY5ZbGQgiBnPv1Z648f3LrSkmzQhPY7y5e12/tUnne37/H7OYYqT769tcBWbLlJ3vmvGgyDlSWLGdEDg1q4nrSS4ITrUTzt6ZUh4X8OpvvXGlwj3uScRZ9zOrtWeDTo6KlSuSW1tDTIzJxyOqRAdzRqr90V3MG5dM1xXrlJKKXxmlMyfi/zAcunJuUVMCOtqfAUVbBM8+62MgSgkcQKb0nHI+Vou2naWIU1rrEVRU+S4VquoI5xMWo+xBqL28scQ1n0HqihVZB0XwufUt/OTEwWbKfwbNowA/Iy05BzNUXYbUAsgJsf1ZSlTCjxIH6v+E3gQNArBAInv9vwsjFmmo3UXfipd4mGptt31BCAfy1/NOxEjB+cmKpQMPTM+lX3kGLx7FBt++28VvPZLcRT3krKNwp9kqkSg4FMk225pU3QVzn/Tb+kbEQoaR6C9dgFu7J5dF/BOen7DC6wV0GTyAYtJY/tCtaBT9QNxyzcQ6jom5JfI4tIAn4WwjMW76LHWs=) 2025-04-05 12:03:31.543489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7w2vyQHis5vCflZuIAJmVvcAc5aR2BeifGzrgukrUIyQ2JJoBHOQwwUEDNB97gzg8axR412sxd2nkNYOIxst4=) 2025-04-05 12:03:31.545055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOOzbPvAbAflO9wCopOXrvVIf5L8DhzLHioEHNFWJHuM) 2025-04-05 12:03:31.545735 | orchestrator | 2025-04-05 12:03:31.547075 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:31.548203 | orchestrator | Saturday 05 April 2025 12:03:31 +0000 (0:00:00.997) 0:00:08.807 ******** 2025-04-05 12:03:32.572376 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNO22BCoVVb8KfWZPu9JVythku4CYq0eVqlfssjOJJwZerPlWMgbW80mc74TrpVQdE2nY7Z9A4rXtO83RoQSYI=) 2025-04-05 12:03:32.573299 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa3rkLn1VACCyuGTQAiE6Zaaqk6ShA2Ovg4muHZGk7R16jwBey8VIZJq+ZnH1GjfQWpNfrW9BegRh3rmHHeSOTcsGgZwrk9aZuyP3ER3SmRC//T3plP1/Qy/KZolcBeJc9T5ZmoIuvoODjwe4NQRD/TrXmRLHF7r/E+vUrzxSKPJ9E4g89IcOeuByVZbZnMrLwhp5fFEvmhmL+iuP+vZ+x0uJ9qeIWqu6q0JesS6lHkLu0wfQNqfVdtK+DHJZdg+67z41gcM7rGf7L0SBOJWXLkETB6OOkNxt2hfI4sAguhZ6B5DN83hZIrKFkSPcc4w3oD3RLwgvewlEIOR2wcSMnfYbqJbKCOqeYKSRwnKad3eDLRz2xx5qRmtYO++mcJBMbHQ+qp3p9nRAgbk5+danVlaK64ZBf681E51GqxjdfePjm6QAAhAovcUTGSMHR/+AzPv0Hr6qy0Y492yk9Wc6xlJ/UdQ99erThwhD1859VEmK/4VB7MmLGIoOPiTpEyfs=) 2025-04-05 12:03:32.574226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDT+b06uoBffYNYsOCGeVafvu+9NfYbT+rDY8cgfOUSQ) 2025-04-05 12:03:32.574417 | orchestrator | 2025-04-05 12:03:32.574858 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:32.575488 | orchestrator | Saturday 05 April 2025 12:03:32 +0000 (0:00:01.029) 0:00:09.837 ******** 2025-04-05 12:03:33.574672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbt0aWEQgjxOrtY8yVdCnse6ASItK3ZVEwdcX9CTY3u) 2025-04-05 12:03:33.574911 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3+CvmUrVGI+vD/0cuMJ5d6m5joFg7ASLEApm9SqmT7ltFon0VZRSflHQLTR1gOEZu/mZg81N57NjVl2odOEE7DbBs0NERo4FhhjdKSKiUe9ujcQDfAx8gT8i/penc1OgrXUgP8u40Vfpy+QC68ad/E1h3V3ZkBuuT87xmLo+ycQlwW7RGqESeTH9z1fHFaNUuUXCxjBDvevwOpEpOjQxMv0Rk+E/IXDvP0dclpBUmFaR7Bv17eeIednV2zTlB8PpLsmIgPTWjnBfyQJi0RaBAjrMVO+afgOq1Cg58I5XEzhtsisFN/qXIhbJLHtKJu9naSaMZkvrIIUkJRCPQmTEwfFMiCTFxzp4r3HU5qw+kTn/VJ6Vgscf3LUy8j19AeHNWePfpuMfdcw63lzPXsgKSVPGzPxn6XI1PFaMz5eDfIb9dTrdTj1sZc+fWX5yVkC15+4p/MDoXhzUzH/rm7l0oM1Tk2c6B1GCV5v56cjCUFbENpSiC6dgIFoGBb7NWrAM=) 2025-04-05 12:03:33.574959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG67ks0iE3H954G7N2LewHlQpk6NdXYfBEN5SveFB28+ZSVt551IfDMriHpnj1r9VyLFfDlRjwY9s/hEEP5EVnk=) 2025-04-05 12:03:33.574984 | orchestrator | 2025-04-05 12:03:33.575045 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:33.575328 | orchestrator | Saturday 05 April 2025 12:03:33 +0000 (0:00:01.000) 0:00:10.837 ******** 2025-04-05 12:03:34.582396 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUn0rP36c5r06qlC0fatjeEnrbiBTSwAfCbWF3nyXrqMsZ4kTq4MsOZ5l8C/igpex+4vm07kcuJ04j03O/RiJzsoCkoYqHUPcbwxc+wyEwSI2owYssmYbhqgVakdSqcw5nxs6vPpu6z3m5bj6rprdrQ1+C1GBqVzpp6pQzxP3wQOmH31icN1sGQW7E3hmOEvEP8x5GYicx8Vd6R0xbcrwVH5JcE3lxEKPkI8RuKQ0F+pQQjKngP+xTflJNNE2XsoxowbA9JTk7jgdNauwFcfQ2gcxMxgbNTmJnUi9hVA4ucKQ1pHXM/xYGQuZwRIu2+rhvI49QgryjHihzONXtLfH2u3KcsjktfVgLhCH5bNkJrS0MB0cThAiTQaoXBR9uRNSRxK6ZKFd/+vsVe4Y1RwR7m/P9weyokfxCjir05w6I0t/PnVXcq+AIi3D1egx6yLRDW+2qq7Twb58cLNq4uQ9uPHDcu4fuG5aDEXjiCNFckUvyEMFQmUkYhlJRW3+tt50=) 2025-04-05 12:03:34.583255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCgNz0ojjtytB9ariOD2ftv1YI4FEsErnAN1D+u1w7c) 2025-04-05 12:03:34.583717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLM8xKj23nsPcpcVAWgzyTpzP3SXHZBIqD7LO3DvWMOIvX34asd5Hh7QxtHYBIIOpMLlaXNVqUh74N/uDK6wN0=) 2025-04-05 12:03:34.584114 | orchestrator | 2025-04-05 12:03:34.585191 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:34.585614 | orchestrator | Saturday 05 April 2025 12:03:34 +0000 (0:00:01.008) 0:00:11.846 ******** 2025-04-05 12:03:35.580108 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxPzeRFENm+MO5E/V71hEPCMkuFbrvKdFutEcsjNkoH9NUqmTHS82A2EFVw359GALtR8gazfXrgy8/f7/hj+RiQW4I4KD30bckWUY1iXgwxaUJZMBk+AGxHYbcL007hicN4kNfqEJZSAn73+29nHpuV+72+1vhat3v7SwA8Txeily3Gf0yGrTkI8X0odvWC45IoXQTgMxbF/AU/ybKRxEaoJe58bCc9VYx5Ub37+JmTXSxGO5FTShHT0RIpFoMuEMDmbE1Kang3RGFvpKLTwSfx90adILZDxUnjtHUegGvU1oaG0FWrEKt0yNcvsltXDiOahWavP0Pl6MPT87oTBy8R8oO/FxS1JDfotHvZCIjkSM7nLnuOFr7BLCGioTZu55H57AJC0RImNal19vVnkJtT8bDcMSEd1VJs3Fod2auR/YRt5LZw2AXn9m2EZqzJ3IoddvvLDAxCMouK7ssyyYg9kwI5SFPWmwkHqBRlYMoAztW/F1TkuuAZ9LBNId7qYs=) 2025-04-05 12:03:35.582099 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFuDnAevyYikNNjZJfTwcYMpw3PJTTavQTN8yJxdJ+P7+zZmYROAbPDm/MUC3X7cBOzjZpTlVe8Zk5IxDdhpGos=) 2025-04-05 12:03:40.325529 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIAnHLb+rjelie/RZ5PZ5npABqoJhNVvhk/ZAvJc/SlMm) 2025-04-05 12:03:40.325648 | orchestrator | 2025-04-05 12:03:40.325667 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-04-05 12:03:40.325680 | orchestrator | Saturday 05 April 2025 12:03:35 +0000 (0:00:00.999) 0:00:12.845 ******** 2025-04-05 12:03:40.325725 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-05 12:03:40.325793 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-05 12:03:40.326368 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-05 12:03:40.326414 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-05 12:03:40.326461 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-05 12:03:40.326948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-05 12:03:40.327824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-05 12:03:40.328071 | orchestrator | 2025-04-05 12:03:40.328095 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-04-05 12:03:40.329265 | orchestrator | Saturday 05 April 2025 12:03:40 +0000 (0:00:04.745) 0:00:17.590 ******** 2025-04-05 12:03:40.492285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-05 12:03:40.493146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-05 12:03:40.493626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-05 12:03:40.494306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-05 12:03:40.494874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-05 12:03:40.495208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-05 12:03:40.495745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-05 12:03:40.496215 | orchestrator | 2025-04-05 12:03:40.496658 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:40.497127 | orchestrator | Saturday 05 April 2025 12:03:40 +0000 (0:00:00.167) 0:00:17.758 ******** 2025-04-05 12:03:41.508029 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvBJs3uYLXegz34ZUbQDkU5DN5W/W1uVzXl+X6MMyZX6ceEqmUFGnZ/8/NGfySXrhnJPLvZsvAHPeH2RAT2lOL4V490D/J8InImi6qvwWVjCmBromL02L5YmZH8awiyiZZ8y456EIzBMI4P34FK5jk05hpDbnCb4I/wvBhI27f51YuzX9B2X+2izZxXf2TVu30cqOeuL5uvkTNK9y8AuYUYOWo469NgVs7kL72xss4sEvaKk5Oafl7PW0h3sWCjAtGay2XYiihhysyD0tiYYPfYECtqeXar2nYbPPu9wroZwPAvI80lF1t2ENXQznKW0ZZNWIJrRWxEO4jht2dMA192TsNKapFVVs5PoMChRCEoSIMgro7/3xeAQJvq+exCgvBHrsFdc6WPSgVnDOXUzL01FZgEpi2oxUz39AQ/8Geqe7k2CMSVM9tlc8azwBdZsDXuhX3mc93A2skhGeWIgZSdKWPz05ug9AGx3791Gbbet71FUWzl9LtNuxBu0+Cobs=) 2025-04-05 12:03:41.508240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCOMq5nMOd4dz6GiRYJZ4KOIvghH7unA3M/yKCxxP1OgescufFuEMthkSibIUNdjbvTln3UbJ7iR+fjn53Dg6is=) 2025-04-05 12:03:41.508859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKFXhz4ntLvm+kFZlLDGfdrwCLAOEYmu8D96SKKWpSOC) 2025-04-05 12:03:41.509118 | orchestrator | 2025-04-05 12:03:41.510211 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:41.510463 | orchestrator | Saturday 05 April 2025 12:03:41 +0000 (0:00:01.013) 0:00:18.771 ******** 2025-04-05 12:03:42.514189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMAfDH/u7eg+fQnrEcJ/3HHgGFdQ/ErhOrqbj4hm+4LV9BAGj4kwkYJ4Mrfyj8PJ4fnkD7m74n1UrdOjJVjnogpYGLgqib8/902iMd7AfEcJttRS0mii0XDjFfr3jtUdKVUeQO6u/6dhEcWCRaIHfUspFY7mcftMqdnygLKtG7e8/HgnToqD10PqNHA70pHSBrYqb73XgxR0czwKAyTkJihq593NqNHFahKW00JnPErO48r6p8zKO1CcZv1g0EyYbK/VsPhdBGmgNpagCZ00MQDS829X7BZA0IucCPQbCHumlEogSBLas+f3m8az5OlLa7Wst0ejiXYLMmU9kOZC9laOGXx/QXhWOxFZhP1uQN9W607x7tztXoJeGyJNAEB6O6qZ7BDwDaY1SyZliIGfwh+O3l8pbEL5ebcu7pACCwyNj/2Q7E1BB4XhgG25tuiDN36vMJm0zoEHwoTP1V9wMxFk5HDleJkIWHe7AWOmSlDuGFrHKlCqQA7ohjUjkx4FE=) 2025-04-05 12:03:42.514794 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOK1pmRQ/kixmNPDJ6owFE57XWnLK25ldFph1RM+D0gzZEOdSgijE2Azshb8b+sl5Kv9p6iaFWX+CckT+lh+Yuw=) 2025-04-05 12:03:42.515086 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQhn+kSAelBsE4KT8VEsSknHMFy70RBl+WZZr05ixhe) 2025-04-05 12:03:42.515825 | orchestrator | 2025-04-05 12:03:42.516373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:42.517137 | orchestrator | Saturday 05 April 2025 12:03:42 +0000 (0:00:01.006) 0:00:19.777 ******** 2025-04-05 12:03:43.534958 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY5ZbGQgiBnPv1Z648f3LrSkmzQhPY7y5e12/tUnne37/H7OYYqT769tcBWbLlJ3vmvGgyDlSWLGdEDg1q4nrSS4ITrUTzt6ZUh4X8OpvvXGlwj3uScRZ9zOrtWeDTo6KlSuSW1tDTIzJxyOqRAdzRqr90V3MG5dM1xXrlJKKXxmlMyfi/zAcunJuUVMCOtqfAUVbBM8+62MgSgkcQKb0nHI+Vou2naWIU1rrEVRU+S4VquoI5xMWo+xBqL28scQ1n0HqihVZB0XwufUt/OTEwWbKfwbNowA/Iy05BzNUXYbUAsgJsf1ZSlTCjxIH6v+E3gQNArBAInv9vwsjFmmo3UXfipd4mGptt31BCAfy1/NOxEjB+cmKpQMPTM+lX3kGLx7FBt++28VvPZLcRT3krKNwp9kqkSg4FMk225pU3QVzn/Tb+kbEQoaR6C9dgFu7J5dF/BOen7DC6wV0GTyAYtJY/tCtaBT9QNxyzcQ6jom5JfI4tIAn4WwjMW76LHWs=) 2025-04-05 12:03:43.535129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7w2vyQHis5vCflZuIAJmVvcAc5aR2BeifGzrgukrUIyQ2JJoBHOQwwUEDNB97gzg8axR412sxd2nkNYOIxst4=) 2025-04-05 
12:03:43.535162 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOOzbPvAbAflO9wCopOXrvVIf5L8DhzLHioEHNFWJHuM) 2025-04-05 12:03:43.535763 | orchestrator | 2025-04-05 12:03:43.536193 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:43.536827 | orchestrator | Saturday 05 April 2025 12:03:43 +0000 (0:00:01.020) 0:00:20.798 ******** 2025-04-05 12:03:44.579363 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa3rkLn1VACCyuGTQAiE6Zaaqk6ShA2Ovg4muHZGk7R16jwBey8VIZJq+ZnH1GjfQWpNfrW9BegRh3rmHHeSOTcsGgZwrk9aZuyP3ER3SmRC//T3plP1/Qy/KZolcBeJc9T5ZmoIuvoODjwe4NQRD/TrXmRLHF7r/E+vUrzxSKPJ9E4g89IcOeuByVZbZnMrLwhp5fFEvmhmL+iuP+vZ+x0uJ9qeIWqu6q0JesS6lHkLu0wfQNqfVdtK+DHJZdg+67z41gcM7rGf7L0SBOJWXLkETB6OOkNxt2hfI4sAguhZ6B5DN83hZIrKFkSPcc4w3oD3RLwgvewlEIOR2wcSMnfYbqJbKCOqeYKSRwnKad3eDLRz2xx5qRmtYO++mcJBMbHQ+qp3p9nRAgbk5+danVlaK64ZBf681E51GqxjdfePjm6QAAhAovcUTGSMHR/+AzPv0Hr6qy0Y492yk9Wc6xlJ/UdQ99erThwhD1859VEmK/4VB7MmLGIoOPiTpEyfs=) 2025-04-05 12:03:44.579557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNO22BCoVVb8KfWZPu9JVythku4CYq0eVqlfssjOJJwZerPlWMgbW80mc74TrpVQdE2nY7Z9A4rXtO83RoQSYI=) 2025-04-05 12:03:44.580703 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDT+b06uoBffYNYsOCGeVafvu+9NfYbT+rDY8cgfOUSQ) 2025-04-05 12:03:44.580768 | orchestrator | 2025-04-05 12:03:44.581166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:44.581979 | orchestrator | Saturday 05 April 2025 12:03:44 +0000 (0:00:01.043) 0:00:21.842 ******** 2025-04-05 12:03:45.598439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3+CvmUrVGI+vD/0cuMJ5d6m5joFg7ASLEApm9SqmT7ltFon0VZRSflHQLTR1gOEZu/mZg81N57NjVl2odOEE7DbBs0NERo4FhhjdKSKiUe9ujcQDfAx8gT8i/penc1OgrXUgP8u40Vfpy+QC68ad/E1h3V3ZkBuuT87xmLo+ycQlwW7RGqESeTH9z1fHFaNUuUXCxjBDvevwOpEpOjQxMv0Rk+E/IXDvP0dclpBUmFaR7Bv17eeIednV2zTlB8PpLsmIgPTWjnBfyQJi0RaBAjrMVO+afgOq1Cg58I5XEzhtsisFN/qXIhbJLHtKJu9naSaMZkvrIIUkJRCPQmTEwfFMiCTFxzp4r3HU5qw+kTn/VJ6Vgscf3LUy8j19AeHNWePfpuMfdcw63lzPXsgKSVPGzPxn6XI1PFaMz5eDfIb9dTrdTj1sZc+fWX5yVkC15+4p/MDoXhzUzH/rm7l0oM1Tk2c6B1GCV5v56cjCUFbENpSiC6dgIFoGBb7NWrAM=) 2025-04-05 12:03:45.598615 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG67ks0iE3H954G7N2LewHlQpk6NdXYfBEN5SveFB28+ZSVt551IfDMriHpnj1r9VyLFfDlRjwY9s/hEEP5EVnk=) 2025-04-05 12:03:45.600541 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbt0aWEQgjxOrtY8yVdCnse6ASItK3ZVEwdcX9CTY3u) 2025-04-05 12:03:45.601125 | orchestrator | 2025-04-05 12:03:45.601819 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:45.602489 | orchestrator | Saturday 05 April 2025 12:03:45 +0000 (0:00:01.021) 0:00:22.863 ******** 2025-04-05 12:03:46.615189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDUn0rP36c5r06qlC0fatjeEnrbiBTSwAfCbWF3nyXrqMsZ4kTq4MsOZ5l8C/igpex+4vm07kcuJ04j03O/RiJzsoCkoYqHUPcbwxc+wyEwSI2owYssmYbhqgVakdSqcw5nxs6vPpu6z3m5bj6rprdrQ1+C1GBqVzpp6pQzxP3wQOmH31icN1sGQW7E3hmOEvEP8x5GYicx8Vd6R0xbcrwVH5JcE3lxEKPkI8RuKQ0F+pQQjKngP+xTflJNNE2XsoxowbA9JTk7jgdNauwFcfQ2gcxMxgbNTmJnUi9hVA4ucKQ1pHXM/xYGQuZwRIu2+rhvI49QgryjHihzONXtLfH2u3KcsjktfVgLhCH5bNkJrS0MB0cThAiTQaoXBR9uRNSRxK6ZKFd/+vsVe4Y1RwR7m/P9weyokfxCjir05w6I0t/PnVXcq+AIi3D1egx6yLRDW+2qq7Twb58cLNq4uQ9uPHDcu4fuG5aDEXjiCNFckUvyEMFQmUkYhlJRW3+tt50=) 2025-04-05 12:03:46.615824 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLM8xKj23nsPcpcVAWgzyTpzP3SXHZBIqD7LO3DvWMOIvX34asd5Hh7QxtHYBIIOpMLlaXNVqUh74N/uDK6wN0=) 2025-04-05 12:03:46.615864 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCgNz0ojjtytB9ariOD2ftv1YI4FEsErnAN1D+u1w7c) 2025-04-05 12:03:46.617059 | orchestrator | 2025-04-05 12:03:46.618297 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-05 12:03:46.619036 | orchestrator | Saturday 05 April 2025 12:03:46 +0000 (0:00:01.015) 0:00:23.878 ******** 2025-04-05 12:03:47.628590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFuDnAevyYikNNjZJfTwcYMpw3PJTTavQTN8yJxdJ+P7+zZmYROAbPDm/MUC3X7cBOzjZpTlVe8Zk5IxDdhpGos=) 2025-04-05 12:03:47.628740 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxPzeRFENm+MO5E/V71hEPCMkuFbrvKdFutEcsjNkoH9NUqmTHS82A2EFVw359GALtR8gazfXrgy8/f7/hj+RiQW4I4KD30bckWUY1iXgwxaUJZMBk+AGxHYbcL007hicN4kNfqEJZSAn73+29nHpuV+72+1vhat3v7SwA8Txeily3Gf0yGrTkI8X0odvWC45IoXQTgMxbF/AU/ybKRxEaoJe58bCc9VYx5Ub37+JmTXSxGO5FTShHT0RIpFoMuEMDmbE1Kang3RGFvpKLTwSfx90adILZDxUnjtHUegGvU1oaG0FWrEKt0yNcvsltXDiOahWavP0Pl6MPT87oTBy8R8oO/FxS1JDfotHvZCIjkSM7nLnuOFr7BLCGioTZu55H57AJC0RImNal19vVnkJtT8bDcMSEd1VJs3Fod2auR/YRt5LZw2AXn9m2EZqzJ3IoddvvLDAxCMouK7ssyyYg9kwI5SFPWmwkHqBRlYMoAztW/F1TkuuAZ9LBNId7qYs=) 2025-04-05 12:03:47.629265 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAnHLb+rjelie/RZ5PZ5npABqoJhNVvhk/ZAvJc/SlMm) 2025-04-05 12:03:47.630256 | orchestrator | 2025-04-05 12:03:47.631581 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-04-05 12:03:47.632326 | orchestrator | Saturday 05 April 2025 12:03:47 +0000 (0:00:01.013) 0:00:24.892 ******** 2025-04-05 12:03:47.990552 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-05 12:03:47.990738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-05 12:03:47.991429 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-05 12:03:47.991462 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-05 12:03:47.992238 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-05 12:03:47.992606 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-05 12:03:47.993034 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-05 12:03:47.994087 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:47.994420 | orchestrator | 2025-04-05 12:03:47.994443 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-04-05 12:03:47.994462 | orchestrator | Saturday 05 April 2025 12:03:47 +0000 (0:00:00.363) 0:00:25.256 ******** 2025-04-05 12:03:48.048920 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:48.049972 | orchestrator | 2025-04-05 12:03:48.050231 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-04-05 12:03:48.051479 | orchestrator | Saturday 05 April 2025 12:03:48 +0000 (0:00:00.059) 0:00:25.315 ******** 2025-04-05 12:03:48.101829 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:03:48.101919 | orchestrator | 2025-04-05 12:03:48.102464 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-04-05 12:03:48.102946 | orchestrator | Saturday 05 April 2025 12:03:48 +0000 (0:00:00.051) 0:00:25.367 ******** 2025-04-05 12:03:48.605253 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:48.606506 | orchestrator | 2025-04-05 12:03:48.607742 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:03:48.608411 | orchestrator | 2025-04-05 12:03:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:03:48.609184 | orchestrator | 2025-04-05 12:03:48 | INFO  | Please wait and do not abort execution. 2025-04-05 12:03:48.609214 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:03:48.609810 | orchestrator | 2025-04-05 12:03:48.610651 | orchestrator | 2025-04-05 12:03:48.611027 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:03:48.611949 | orchestrator | Saturday 05 April 2025 12:03:48 +0000 (0:00:00.502) 0:00:25.870 ******** 2025-04-05 12:03:48.612757 | orchestrator | =============================================================================== 2025-04-05 12:03:48.612818 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.33s 2025-04-05 12:03:48.613722 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.75s 2025-04-05 12:03:48.614399 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-04-05 12:03:48.614732 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-04-05 12:03:48.615428 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-04-05 12:03:48.616251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-05 12:03:48.617508 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-05 12:03:48.618211 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-05 12:03:48.619029 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-05 12:03:48.619954 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-04-05 12:03:48.620590 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-04-05 12:03:48.621081 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-04-05 12:03:48.621900 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.01s 2025-04-05 12:03:48.622547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-04-05 12:03:48.623016 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-04-05 12:03:48.623753 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-04-05 12:03:48.624354 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2025-04-05 12:03:48.624857 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.36s 2025-04-05 12:03:48.625447 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-04-05 12:03:48.626138 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-04-05 12:03:49.016361 | orchestrator | + osism apply squid 2025-04-05 12:03:50.635657 | orchestrator | 2025-04-05 12:03:50 | INFO  | Task 7b69751f-42f4-4a10-bb10-8c393fc07d89 (squid) was prepared for execution. 2025-04-05 12:03:54.462269 | orchestrator | 2025-04-05 12:03:50 | INFO  | It takes a moment until task 7b69751f-42f4-4a10-bb10-8c393fc07d89 (squid) has been started and output is visible here. 2025-04-05 12:03:54.462417 | orchestrator | 2025-04-05 12:03:54.464734 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-05 12:03:54.466101 | orchestrator | 2025-04-05 12:03:54.466802 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-05 12:03:54.466831 | orchestrator | Saturday 05 April 2025 12:03:54 +0000 (0:00:00.163) 0:00:00.163 ******** 2025-04-05 12:03:54.551136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 12:03:54.552287 | orchestrator | 2025-04-05 12:03:54.552315 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-05 12:03:54.553474 | orchestrator | Saturday 05 April 2025 12:03:54 +0000 (0:00:00.094) 0:00:00.258 ******** 2025-04-05 12:03:55.811733 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:55.813089 | orchestrator | 2025-04-05 12:03:55.814236 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-05 12:03:55.815104 | orchestrator | Saturday 05 April 2025 12:03:55 +0000 (0:00:01.259) 0:00:01.517 ******** 2025-04-05 12:03:56.931151 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-05 12:03:56.932357 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-05 12:03:56.933502 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-04-05 12:03:56.934194 | orchestrator | 2025-04-05 12:03:56.935189 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-05 12:03:56.936582 | orchestrator | Saturday 05 April 2025 12:03:56 +0000 (0:00:01.120) 0:00:02.637 ******** 2025-04-05 12:03:57.956634 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-05 12:03:57.956830 | orchestrator | 2025-04-05 12:03:57.956854 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-05 12:03:57.956908 | orchestrator | Saturday 05 April 2025 12:03:57 +0000 
(0:00:01.025) 0:00:03.663 ******** 2025-04-05 12:03:58.283961 | orchestrator | ok: [testbed-manager] 2025-04-05 12:03:58.284806 | orchestrator | 2025-04-05 12:03:58.285702 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-04-05 12:03:58.286704 | orchestrator | Saturday 05 April 2025 12:03:58 +0000 (0:00:00.327) 0:00:03.991 ******** 2025-04-05 12:03:59.168172 | orchestrator | changed: [testbed-manager] 2025-04-05 12:03:59.168436 | orchestrator | 2025-04-05 12:03:59.169365 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-05 12:03:59.172662 | orchestrator | Saturday 05 April 2025 12:03:59 +0000 (0:00:00.882) 0:00:04.873 ******** 2025-04-05 12:04:30.654243 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-04-05 12:04:30.654691 | orchestrator | ok: [testbed-manager] 2025-04-05 12:04:30.654726 | orchestrator | 2025-04-05 12:04:30.654742 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-05 12:04:30.654783 | orchestrator | Saturday 05 April 2025 12:04:30 +0000 (0:00:31.480) 0:00:36.354 ******** 2025-04-05 12:04:42.960399 | orchestrator | changed: [testbed-manager] 2025-04-05 12:04:42.960640 | orchestrator | 2025-04-05 12:04:42.960675 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-05 12:04:42.961451 | orchestrator | Saturday 05 April 2025 12:04:42 +0000 (0:00:12.311) 0:00:48.665 ******** 2025-04-05 12:05:43.035315 | orchestrator | Pausing for 60 seconds 2025-04-05 12:05:43.035886 | orchestrator | changed: [testbed-manager] 2025-04-05 12:05:43.035951 | orchestrator | 2025-04-05 12:05:43.035975 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-05 12:05:43.036285 | orchestrator | Saturday 05 April 2025 12:05:43 +0000 (0:01:00.074) 0:01:48.740 ******** 2025-04-05 12:05:43.105725 | orchestrator | ok: [testbed-manager] 2025-04-05 12:05:43.106588 | orchestrator | 2025-04-05 12:05:43.107025 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-05 12:05:43.107506 | orchestrator | Saturday 05 April 2025 12:05:43 +0000 (0:00:00.073) 0:01:48.813 ******** 2025-04-05 12:05:43.685243 | orchestrator | changed: [testbed-manager] 2025-04-05 12:05:43.685963 | orchestrator | 2025-04-05 12:05:43.686345 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:05:43.686896 | orchestrator | 2025-04-05 12:05:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:05:43.687206 | orchestrator | 2025-04-05 12:05:43 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:05:43.688226 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:05:43.689008 | orchestrator | 2025-04-05 12:05:43.689787 | orchestrator | 2025-04-05 12:05:43.690812 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:05:43.691784 | orchestrator | Saturday 05 April 2025 12:05:43 +0000 (0:00:00.579) 0:01:49.393 ******** 2025-04-05 12:05:43.692708 | orchestrator | =============================================================================== 2025-04-05 12:05:43.693200 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-04-05 12:05:43.693615 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.48s 2025-04-05 12:05:43.694272 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.31s 2025-04-05 12:05:43.695020 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.26s 2025-04-05 12:05:43.695402 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2025-04-05 12:05:43.695943 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2025-04-05 12:05:43.696335 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2025-04-05 12:05:43.697079 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2025-04-05 12:05:43.697290 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2025-04-05 12:05:43.697798 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-04-05 12:05:43.698443 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-04-05 12:05:44.014004 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:05:44.014576 | orchestrator | ++ semver latest 9.0.0 2025-04-05 12:05:44.050244 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-05 12:05:45.454899 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:05:45.455023 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-05 12:05:45.455063 | orchestrator | 2025-04-05 12:05:45 | INFO  | Task ca7918e4-798c-4165-8890-a49d120dd0b6 (operator) was prepared for execution. 2025-04-05 12:05:49.079293 | orchestrator | 2025-04-05 12:05:45 | INFO  | It takes a moment until task ca7918e4-798c-4165-8890-a49d120dd0b6 (operator) has been started and output is visible here. 
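Note on the squid recap above: the role copies a docker-compose.yml, (re)starts the service and then waits first for the container to start and then for it to report healthy. A minimal sketch of a compose file that would support such a health wait is shown below; the image, proxy port and container paths are assumptions for illustration only, not the values shipped by osism.services.squid:

    services:
      squid:
        image: ubuntu/squid:latest          # assumed image
        restart: unless-stopped
        ports:
          - "3128:3128"                     # assumed proxy port
        volumes:
          # host path matches the directories created above; container path assumed
          - /opt/squid/configuration:/etc/squid/conf.d:ro
        healthcheck:
          test: ["CMD-SHELL", "nc -z localhost 3128 || exit 1"]   # assumes nc exists in the image
          interval: 30s
          timeout: 5s
          retries: 5

With a healthcheck like this in place, one way to implement a "Wait for an healthy squid service" handler is to poll docker inspect --format '{{.State.Health.Status}}' on the squid container until it returns "healthy"; whether the role does exactly that is not visible in this log.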
2025-04-05 12:05:49.079428 | orchestrator | 2025-04-05 12:05:49.079721 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-05 12:05:49.083288 | orchestrator | 2025-04-05 12:05:49.084021 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 12:05:49.084845 | orchestrator | Saturday 05 April 2025 12:05:49 +0000 (0:00:00.107) 0:00:00.107 ******** 2025-04-05 12:05:52.559727 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:05:52.560392 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:05:52.562280 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:05:52.562624 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:05:52.563654 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:05:52.564288 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:05:52.564904 | orchestrator | 2025-04-05 12:05:52.565710 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-05 12:05:52.566447 | orchestrator | Saturday 05 April 2025 12:05:52 +0000 (0:00:03.481) 0:00:03.588 ******** 2025-04-05 12:05:53.402636 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:05:53.403395 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:05:53.406100 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:05:53.406924 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:05:53.406954 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:05:53.407531 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:05:53.408021 | orchestrator | 2025-04-05 12:05:53.408584 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-05 12:05:53.409119 | orchestrator | 2025-04-05 12:05:53.409486 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-05 12:05:53.409847 | orchestrator | Saturday 05 April 2025 12:05:53 +0000 (0:00:00.841) 0:00:04.430 ******** 2025-04-05 12:05:53.471499 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:05:53.493925 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:05:53.513054 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:05:53.552498 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:05:53.555601 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:05:53.555949 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:05:53.555974 | orchestrator | 2025-04-05 12:05:53.555990 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-05 12:05:53.556011 | orchestrator | Saturday 05 April 2025 12:05:53 +0000 (0:00:00.150) 0:00:04.581 ******** 2025-04-05 12:05:53.611975 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:05:53.634726 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:05:53.659041 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:05:53.699251 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:05:53.701334 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:05:53.701446 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:05:53.702127 | orchestrator | 2025-04-05 12:05:53.702939 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-05 12:05:53.703733 | orchestrator | Saturday 05 April 2025 12:05:53 +0000 (0:00:00.147) 0:00:04.728 ******** 2025-04-05 12:05:54.250670 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:05:54.251751 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:05:54.252606 | orchestrator | changed: [testbed-node-1] 2025-04-05 
12:05:54.253445 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:05:54.254100 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:05:54.254734 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:05:54.255322 | orchestrator | 2025-04-05 12:05:54.256027 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-05 12:05:54.256557 | orchestrator | Saturday 05 April 2025 12:05:54 +0000 (0:00:00.550) 0:00:05.279 ******** 2025-04-05 12:05:55.052037 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:05:55.055532 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:05:55.056150 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:05:55.056178 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:05:55.056198 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:05:55.057361 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:05:55.057855 | orchestrator | 2025-04-05 12:05:55.059246 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-05 12:05:55.059622 | orchestrator | Saturday 05 April 2025 12:05:55 +0000 (0:00:00.800) 0:00:06.080 ******** 2025-04-05 12:05:56.303499 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-05 12:05:56.304757 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-05 12:05:56.305914 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-05 12:05:56.307126 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-05 12:05:56.308167 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-05 12:05:56.309263 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-05 12:05:56.309810 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-05 12:05:56.310792 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-05 12:05:56.311844 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-05 12:05:56.312773 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-05 12:05:56.313456 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-05 12:05:56.314402 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-05 12:05:56.315296 | orchestrator | 2025-04-05 12:05:56.315879 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-05 12:05:56.316306 | orchestrator | Saturday 05 April 2025 12:05:56 +0000 (0:00:01.249) 0:00:07.330 ******** 2025-04-05 12:05:57.408534 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:05:57.409433 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:05:57.409905 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:05:57.410696 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:05:57.411584 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:05:57.412471 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:05:57.413463 | orchestrator | 2025-04-05 12:05:57.414276 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-05 12:05:57.414715 | orchestrator | Saturday 05 April 2025 12:05:57 +0000 (0:00:01.106) 0:00:08.436 ******** 2025-04-05 12:05:58.478815 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-05 12:05:58.479407 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-05 12:05:58.480272 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-05 12:05:58.603118 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.603934 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.605469 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.606467 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.608225 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.609263 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-05 12:05:58.609764 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.610664 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.611965 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.612603 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.613260 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.614013 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-05 12:05:58.614762 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.615384 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.615938 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.616509 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.616999 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.617555 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-05 12:05:58.618276 | orchestrator | 2025-04-05 12:05:58.618760 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-05 12:05:58.619277 | orchestrator | Saturday 05 April 2025 12:05:58 +0000 (0:00:01.195) 0:00:09.631 ******** 2025-04-05 12:05:59.144675 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:05:59.146468 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:05:59.146958 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:05:59.146995 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:05:59.147651 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:05:59.148486 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:05:59.149206 | orchestrator | 2025-04-05 12:05:59.150064 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-05 12:05:59.150614 | orchestrator | Saturday 05 April 2025 12:05:59 +0000 (0:00:00.541) 0:00:10.173 ******** 2025-04-05 12:05:59.221403 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:05:59.246108 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:05:59.264602 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:05:59.304752 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:05:59.305233 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:05:59.305790 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:05:59.306614 | orchestrator | 2025-04-05 12:05:59.307809 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-04-05 12:05:59.308386 | orchestrator | Saturday 05 April 2025 12:05:59 +0000 (0:00:00.161) 0:00:10.334 ******** 2025-04-05 12:06:00.090588 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-05 12:06:00.091326 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:06:00.091365 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:06:00.091420 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:00.091489 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:06:00.091685 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:00.092083 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-05 12:06:00.092990 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:06:00.093667 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:06:00.094299 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:00.095648 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:06:00.097312 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:06:00.099689 | orchestrator | 2025-04-05 12:06:00.100395 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-05 12:06:00.101157 | orchestrator | Saturday 05 April 2025 12:06:00 +0000 (0:00:00.781) 0:00:11.116 ******** 2025-04-05 12:06:00.159109 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:06:00.179499 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:06:00.198943 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:06:00.225127 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:00.225959 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:00.226419 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:00.226880 | orchestrator | 2025-04-05 12:06:00.228130 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-05 12:06:00.228333 | orchestrator | Saturday 05 April 2025 12:06:00 +0000 (0:00:00.139) 0:00:11.255 ******** 2025-04-05 12:06:00.282432 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:06:00.301124 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:06:00.320476 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:06:00.364629 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:00.364916 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:00.366233 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:00.366711 | orchestrator | 2025-04-05 12:06:00.369102 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-05 12:06:00.369616 | orchestrator | Saturday 05 April 2025 12:06:00 +0000 (0:00:00.138) 0:00:11.393 ******** 2025-04-05 12:06:00.423914 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:06:00.443843 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:06:00.468672 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:06:00.496920 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:00.536574 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:00.536713 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:00.537109 | orchestrator | 2025-04-05 12:06:00.537478 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-05 12:06:00.537842 | orchestrator | Saturday 05 April 2025 12:06:00 +0000 (0:00:00.172) 0:00:11.565 ******** 2025-04-05 12:06:01.210221 | orchestrator | changed: [testbed-node-0] 2025-04-05 
12:06:01.211039 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:06:01.212615 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:06:01.213699 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:01.215290 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:01.215924 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:01.215972 | orchestrator | 2025-04-05 12:06:01.216633 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-05 12:06:01.218708 | orchestrator | Saturday 05 April 2025 12:06:01 +0000 (0:00:00.672) 0:00:12.238 ******** 2025-04-05 12:06:01.293588 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:06:01.313850 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:06:01.422156 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:06:01.423988 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:01.425000 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:01.426387 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:01.427108 | orchestrator | 2025-04-05 12:06:01.429252 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:06:01.429368 | orchestrator | 2025-04-05 12:06:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:06:01.430488 | orchestrator | 2025-04-05 12:06:01 | INFO  | Please wait and do not abort execution. 2025-04-05 12:06:01.430527 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.431192 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.432384 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.432717 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.433828 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.434605 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:06:01.435899 | orchestrator | 2025-04-05 12:06:01.436002 | orchestrator | 2025-04-05 12:06:01.437095 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:06:01.437954 | orchestrator | Saturday 05 April 2025 12:06:01 +0000 (0:00:00.211) 0:00:12.449 ******** 2025-04-05 12:06:01.438544 | orchestrator | =============================================================================== 2025-04-05 12:06:01.439135 | orchestrator | Gathering Facts --------------------------------------------------------- 3.48s 2025-04-05 12:06:01.439936 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2025-04-05 12:06:01.440456 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2025-04-05 12:06:01.440837 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.11s 2025-04-05 12:06:01.441319 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s 2025-04-05 12:06:01.441648 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-04-05 12:06:01.442385 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.78s 2025-04-05 12:06:01.442985 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2025-04-05 12:06:01.443478 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.55s 2025-04-05 12:06:01.443694 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2025-04-05 12:06:01.444273 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-04-05 12:06:01.444761 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-04-05 12:06:01.445030 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-04-05 12:06:01.445628 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2025-04-05 12:06:01.445852 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-04-05 12:06:01.446315 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-04-05 12:06:01.446681 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-04-05 12:06:01.833096 | orchestrator | + osism apply --environment custom facts 2025-04-05 12:06:03.318391 | orchestrator | 2025-04-05 12:06:03 | INFO  | Trying to run play facts in environment custom 2025-04-05 12:06:03.382653 | orchestrator | 2025-04-05 12:06:03 | INFO  | Task f4960a59-ee0e-4d52-af32-3336118e2b43 (facts) was prepared for execution. 2025-04-05 12:06:07.046941 | orchestrator | 2025-04-05 12:06:03 | INFO  | It takes a moment until task f4960a59-ee0e-4d52-af32-3336118e2b43 (facts) has been started and output is visible here. 
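Note on the facts run that follows: Ansible exposes any file dropped under /etc/ansible/facts.d on a managed host as ansible_local.<name> after fact gathering, which is presumably the mechanism behind the "Create custom facts directory" and "Copy fact file(s)" tasks below (assuming the conventional facts.d location, which this log does not show). A minimal, hypothetical sketch with made-up fact content:

    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy a static JSON fact file (example content, not the testbed's)
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/testbed_example.fact
        content: '{"nic": "eth1", "role": "compute"}'
        mode: "0644"

    - name: Re-gather facts so ansible_local picks up the new file
      ansible.builtin.setup:

    - name: Read the custom fact back
      ansible.builtin.debug:
        msg: "{{ ansible_local.testbed_example.nic }}"

The real fact files written later in this log (testbed_ceph_devices, testbed_ceph_osd_devices, and so on) would follow the same pattern, with the closing "Gather facts for all hosts" play presumably making them visible to subsequent plays.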
2025-04-05 12:06:07.047020 | orchestrator | 2025-04-05 12:06:07.048015 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-04-05 12:06:07.048115 | orchestrator | 2025-04-05 12:06:07.048488 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-05 12:06:07.049353 | orchestrator | Saturday 05 April 2025 12:06:07 +0000 (0:00:00.088) 0:00:00.088 ******** 2025-04-05 12:06:08.373920 | orchestrator | ok: [testbed-manager] 2025-04-05 12:06:08.374600 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:06:08.375617 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:06:08.377574 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:08.378058 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:08.379194 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:08.380750 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:06:08.382649 | orchestrator | 2025-04-05 12:06:08.384006 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-04-05 12:06:08.384276 | orchestrator | Saturday 05 April 2025 12:06:08 +0000 (0:00:01.326) 0:00:01.414 ******** 2025-04-05 12:06:09.588717 | orchestrator | ok: [testbed-manager] 2025-04-05 12:06:09.589576 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:06:09.589688 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:06:09.591939 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:09.592948 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:09.594219 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:09.595469 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:06:09.596873 | orchestrator | 2025-04-05 12:06:09.597727 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-04-05 12:06:09.598358 | orchestrator | 2025-04-05 12:06:09.599114 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-05 12:06:09.600081 | orchestrator | Saturday 05 April 2025 12:06:09 +0000 (0:00:01.215) 0:00:02.629 ******** 2025-04-05 12:06:09.716702 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:09.718100 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:09.719280 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:09.720136 | orchestrator | 2025-04-05 12:06:09.721003 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-05 12:06:09.721971 | orchestrator | Saturday 05 April 2025 12:06:09 +0000 (0:00:00.130) 0:00:02.759 ******** 2025-04-05 12:06:09.907969 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:09.908131 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:09.908622 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:09.908673 | orchestrator | 2025-04-05 12:06:09.908758 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-05 12:06:09.911037 | orchestrator | Saturday 05 April 2025 12:06:09 +0000 (0:00:00.191) 0:00:02.951 ******** 2025-04-05 12:06:10.101978 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:10.102194 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:10.102221 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:10.102510 | orchestrator | 2025-04-05 12:06:10.102546 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-05 12:06:10.102728 | orchestrator | Saturday 
05 April 2025 12:06:10 +0000 (0:00:00.192) 0:00:03.144 ******** 2025-04-05 12:06:10.253768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:06:10.257419 | orchestrator | 2025-04-05 12:06:10.258906 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-05 12:06:10.260011 | orchestrator | Saturday 05 April 2025 12:06:10 +0000 (0:00:00.151) 0:00:03.295 ******** 2025-04-05 12:06:10.699750 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:10.701159 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:10.701979 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:10.702946 | orchestrator | 2025-04-05 12:06:10.703995 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-05 12:06:10.705213 | orchestrator | Saturday 05 April 2025 12:06:10 +0000 (0:00:00.446) 0:00:03.742 ******** 2025-04-05 12:06:10.800210 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:10.800339 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:10.800841 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:10.801009 | orchestrator | 2025-04-05 12:06:10.801037 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-05 12:06:10.801105 | orchestrator | Saturday 05 April 2025 12:06:10 +0000 (0:00:00.101) 0:00:03.844 ******** 2025-04-05 12:06:11.975103 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:11.975893 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:11.976534 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:11.976892 | orchestrator | 2025-04-05 12:06:11.977778 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-05 12:06:11.978685 | orchestrator | Saturday 05 April 2025 12:06:11 +0000 (0:00:01.174) 0:00:05.018 ******** 2025-04-05 12:06:12.552682 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:12.553699 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:12.554070 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:12.554787 | orchestrator | 2025-04-05 12:06:12.555498 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-05 12:06:12.556371 | orchestrator | Saturday 05 April 2025 12:06:12 +0000 (0:00:00.575) 0:00:05.594 ******** 2025-04-05 12:06:13.779394 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:13.780544 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:13.780593 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:13.781452 | orchestrator | 2025-04-05 12:06:13.782185 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-05 12:06:13.783027 | orchestrator | Saturday 05 April 2025 12:06:13 +0000 (0:00:01.225) 0:00:06.819 ******** 2025-04-05 12:06:28.217991 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:28.218251 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:28.219438 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:28.220225 | orchestrator | 2025-04-05 12:06:28.221709 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-04-05 12:06:28.222741 | orchestrator | Saturday 05 April 2025 12:06:28 +0000 (0:00:14.433) 0:00:21.252 ******** 2025-04-05 12:06:28.279576 | orchestrator | 
skipping: [testbed-node-3] 2025-04-05 12:06:28.317063 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:28.317313 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:28.318121 | orchestrator | 2025-04-05 12:06:28.318643 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-04-05 12:06:28.319195 | orchestrator | Saturday 05 April 2025 12:06:28 +0000 (0:00:00.107) 0:00:21.360 ******** 2025-04-05 12:06:36.809533 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:36.810406 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:36.810460 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:36.811681 | orchestrator | 2025-04-05 12:06:36.813018 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-05 12:06:36.814467 | orchestrator | Saturday 05 April 2025 12:06:36 +0000 (0:00:08.490) 0:00:29.850 ******** 2025-04-05 12:06:37.246594 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:37.248040 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:37.248800 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:37.249394 | orchestrator | 2025-04-05 12:06:37.250331 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-05 12:06:37.251182 | orchestrator | Saturday 05 April 2025 12:06:37 +0000 (0:00:00.438) 0:00:30.289 ******** 2025-04-05 12:06:41.059509 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-05 12:06:41.059940 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-05 12:06:41.059977 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-05 12:06:41.059998 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-04-05 12:06:41.061239 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-05 12:06:41.062109 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-05 12:06:41.062647 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-05 12:06:41.063776 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-05 12:06:41.064251 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-05 12:06:41.064985 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-05 12:06:41.065642 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-05 12:06:41.065993 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-04-05 12:06:41.066885 | orchestrator | 2025-04-05 12:06:41.067188 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-05 12:06:41.067683 | orchestrator | Saturday 05 April 2025 12:06:41 +0000 (0:00:03.809) 0:00:34.098 ******** 2025-04-05 12:06:42.554003 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:42.557472 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:42.557517 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:42.557578 | orchestrator | 2025-04-05 12:06:45.990339 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-05 12:06:45.990456 | orchestrator | 2025-04-05 12:06:45.990474 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-05 12:06:45.990489 | orchestrator | 
Saturday 05 April 2025 12:06:42 +0000 (0:00:01.496) 0:00:35.595 ********
2025-04-05 12:06:45.990518 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:06:45.990765 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:06:45.993966 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:06:45.999578 | orchestrator | ok: [testbed-manager]
2025-04-05 12:06:45.999930 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:06:45.999962 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:06:46.002237 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:06:46.002948 | orchestrator |
2025-04-05 12:06:46.003452 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:06:46.003779 | orchestrator | 2025-04-05 12:06:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-05 12:06:46.003879 | orchestrator | 2025-04-05 12:06:46 | INFO  | Please wait and do not abort execution.
2025-04-05 12:06:46.004542 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:06:46.004899 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:06:46.005416 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:06:46.005827 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:06:46.006315 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:06:46.006820 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:06:46.007314 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:06:46.007890 | orchestrator |
2025-04-05 12:06:46.008316 | orchestrator |
2025-04-05 12:06:46.008997 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:06:46.009339 | orchestrator | Saturday 05 April 2025 12:06:45 +0000 (0:00:03.437) 0:00:39.033 ********
2025-04-05 12:06:46.010095 | orchestrator | ===============================================================================
2025-04-05 12:06:46.010832 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.43s
2025-04-05 12:06:46.011890 | orchestrator | Install required packages (Debian) -------------------------------------- 8.49s
2025-04-05 12:06:46.013070 | orchestrator | Copy fact files --------------------------------------------------------- 3.81s
2025-04-05 12:06:46.014302 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.44s
2025-04-05 12:06:46.014905 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.50s
2025-04-05 12:06:46.015609 | orchestrator | Create custom facts directory ------------------------------------------- 1.33s
2025-04-05 12:06:46.016196 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.23s
2025-04-05 12:06:46.016552 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-04-05 12:06:46.017041 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.17s
2025-04-05 12:06:46.017589 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.58s
2025-04-05 12:06:46.017990 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-04-05 12:06:46.018640 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-04-05 12:06:46.019506 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-04-05 12:06:46.020043 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-04-05 12:06:46.020479 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-04-05 12:06:46.020520 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2025-04-05 12:06:46.020686 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-04-05 12:06:46.021163 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-04-05 12:06:46.391937 | orchestrator | + osism apply bootstrap
2025-04-05 12:06:47.949287 | orchestrator | 2025-04-05 12:06:47 | INFO  | Task 04723d1b-b508-4277-a76b-c79fac7118e5 (bootstrap) was prepared for execution.
2025-04-05 12:06:47.950527 | orchestrator | 2025-04-05 12:06:47 | INFO  | It takes a moment until task 04723d1b-b508-4277-a76b-c79fac7118e5 (bootstrap) has been started and output is visible here.
2025-04-05 12:06:51.557582 | orchestrator |
2025-04-05 12:06:51.558363 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-04-05 12:06:51.560436 | orchestrator |
2025-04-05 12:06:51.561775 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-04-05 12:06:51.562577 | orchestrator | Saturday 05 April 2025 12:06:51 +0000 (0:00:00.123) 0:00:00.123 ********
2025-04-05 12:06:51.628916 | orchestrator | ok: [testbed-manager]
2025-04-05 12:06:51.646656 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:06:51.668810 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:06:51.724661 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:06:51.724818 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:06:51.724842 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:06:51.724886 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:06:51.724907 | orchestrator |
2025-04-05 12:06:51.726068 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-05 12:06:51.726916 | orchestrator |
2025-04-05 12:06:51.727593 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-05 12:06:51.728218 | orchestrator | Saturday 05 April 2025 12:06:51 +0000 (0:00:00.166) 0:00:00.290 ********
2025-04-05 12:06:54.946486 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:06:54.947237 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:06:54.947296 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:06:54.948843 | orchestrator | ok: [testbed-manager]
2025-04-05 12:06:54.949197 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:06:54.949215 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:06:54.949239 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:06:54.949263 | orchestrator |
2025-04-05 12:06:54.949582 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-04-05 12:06:54.950056 | orchestrator |
2025-04-05 12:06:54.950470 | orchestrator | TASK [Gathers facts 
about hosts] *********************************************** 2025-04-05 12:06:54.950902 | orchestrator | Saturday 05 April 2025 12:06:54 +0000 (0:00:03.223) 0:00:03.514 ******** 2025-04-05 12:06:55.022950 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-05 12:06:55.053955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-05 12:06:55.094985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-05 12:06:55.095025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:06:55.095055 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:06:55.095072 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-05 12:06:55.095096 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-05 12:06:55.095194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:06:55.095219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-05 12:06:55.095410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:06:55.095774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-05 12:06:55.311915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:06:55.312396 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-05 12:06:55.312427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-05 12:06:55.312687 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-05 12:06:55.313300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-05 12:06:55.315615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:06:55.315904 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:06:55.315932 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-05 12:06:55.315946 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-05 12:06:55.315961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:06:55.315975 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-05 12:06:55.315994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-05 12:06:55.316480 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-05 12:06:55.317016 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-05 12:06:55.317715 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-05 12:06:55.318251 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:06:55.318729 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:06:55.319293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:06:55.319756 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-05 12:06:55.321553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:06:55.321881 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:06:55.322058 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:06:55.322453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:06:55.322683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 
12:06:55.324815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:06:55.324918 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:06:55.324938 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:06:55.324952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:06:55.324967 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:06:55.324981 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:06:55.324999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:06:55.325083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:06:55.325473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:06:55.325612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:06:55.325847 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:06:55.326197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:06:55.326433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:06:55.326649 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:06:55.326885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:06:55.327135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:06:55.327340 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:06:55.327559 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:06:55.327791 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:06:55.328024 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:06:55.328265 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:06:55.328493 | orchestrator | 2025-04-05 12:06:55.328718 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-05 12:06:55.329105 | orchestrator | 2025-04-05 12:06:55.329229 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-05 12:06:55.329478 | orchestrator | Saturday 05 April 2025 12:06:55 +0000 (0:00:00.367) 0:00:03.881 ******** 2025-04-05 12:06:56.386977 | orchestrator | ok: [testbed-manager] 2025-04-05 12:06:56.389228 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:56.389536 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:06:56.389559 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:06:56.389571 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:06:56.389588 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:56.390456 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:06:56.390917 | orchestrator | 2025-04-05 12:06:56.391424 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-05 12:06:56.391909 | orchestrator | Saturday 05 April 2025 12:06:56 +0000 (0:00:01.073) 0:00:04.954 ******** 2025-04-05 12:06:57.404663 | orchestrator | ok: [testbed-manager] 2025-04-05 12:06:57.405268 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:06:57.405303 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:06:57.406369 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:06:57.406827 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:06:57.407628 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:06:57.408484 | orchestrator | ok: [testbed-node-3] 
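The hostname handling logged above (and the /etc/hosts templating that follows) is ordinary Ansible host-identity management. As a point of reference only, a minimal hypothetical equivalent is sketched below; this is not the actual osism.commons.hostname or osism.commons.hosts role code, and the 192.0.2.x addresses and host entries are placeholders.

---
# Hypothetical sketch only -- not the osism.commons role implementations.
- hosts: all
  become: true
  tasks:
    - name: Set hostname
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"

    - name: Copy /etc/hostname
      ansible.builtin.copy:
        content: "{{ inventory_hostname }}\n"
        dest: /etc/hostname
        owner: root
        group: root
        mode: "0644"

    - name: Copy /etc/hosts file
      # Static placeholder content; in the real run /etc/hosts is rendered
      # from the inventory via a Jinja2 template instead.
      ansible.builtin.copy:
        content: |
          127.0.0.1 localhost
          192.0.2.10 testbed-manager
          192.0.2.11 testbed-node-0
        dest: /etc/hosts
        owner: root
        group: root
        mode: "0644"

In the actual run the /etc/hosts content comes from the type-template.yml include that appears next in the log, rendered from inventory data rather than from static content as in this sketch.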
2025-04-05 12:06:57.409109 | orchestrator | 2025-04-05 12:06:57.409656 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-05 12:06:57.410300 | orchestrator | Saturday 05 April 2025 12:06:57 +0000 (0:00:01.015) 0:00:05.970 ******** 2025-04-05 12:06:57.625808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:06:57.626661 | orchestrator | 2025-04-05 12:06:57.627518 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-05 12:06:57.628351 | orchestrator | Saturday 05 April 2025 12:06:57 +0000 (0:00:00.223) 0:00:06.193 ******** 2025-04-05 12:06:59.565143 | orchestrator | changed: [testbed-manager] 2025-04-05 12:06:59.566304 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:06:59.566440 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:06:59.566687 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:06:59.567136 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:06:59.567980 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:06:59.568943 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:06:59.569198 | orchestrator | 2025-04-05 12:06:59.569834 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-05 12:06:59.570240 | orchestrator | Saturday 05 April 2025 12:06:59 +0000 (0:00:01.935) 0:00:08.129 ******** 2025-04-05 12:06:59.618657 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:06:59.764679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:06:59.764767 | orchestrator | 2025-04-05 12:06:59.764791 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-05 12:06:59.765329 | orchestrator | Saturday 05 April 2025 12:06:59 +0000 (0:00:00.203) 0:00:08.332 ******** 2025-04-05 12:07:00.931369 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:00.932425 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:00.933119 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:00.934456 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:00.935929 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:00.936929 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:00.937642 | orchestrator | 2025-04-05 12:07:00.939190 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-05 12:07:00.939787 | orchestrator | Saturday 05 April 2025 12:07:00 +0000 (0:00:01.164) 0:00:09.497 ******** 2025-04-05 12:07:01.001912 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:01.661826 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:01.662223 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:01.662953 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:01.662984 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:01.663008 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:01.663389 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:01.664040 | orchestrator | 2025-04-05 12:07:01.664606 | orchestrator | TASK [osism.commons.proxy : Remove 
system wide settings in environment file] *** 2025-04-05 12:07:01.664915 | orchestrator | Saturday 05 April 2025 12:07:01 +0000 (0:00:00.730) 0:00:10.227 ******** 2025-04-05 12:07:01.758704 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:01.786780 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:01.825598 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:02.102461 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:02.103121 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:02.104267 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:02.106070 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:02.106879 | orchestrator | 2025-04-05 12:07:02.108097 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-05 12:07:02.109323 | orchestrator | Saturday 05 April 2025 12:07:02 +0000 (0:00:00.439) 0:00:10.667 ******** 2025-04-05 12:07:02.178419 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:02.213688 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:02.228638 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:02.253118 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:02.305189 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:02.309252 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:02.310491 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:02.310515 | orchestrator | 2025-04-05 12:07:02.310535 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-05 12:07:02.311418 | orchestrator | Saturday 05 April 2025 12:07:02 +0000 (0:00:00.205) 0:00:10.872 ******** 2025-04-05 12:07:02.573329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:02.573778 | orchestrator | 2025-04-05 12:07:02.574115 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-05 12:07:02.574247 | orchestrator | Saturday 05 April 2025 12:07:02 +0000 (0:00:00.268) 0:00:11.141 ******** 2025-04-05 12:07:02.855252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:02.856024 | orchestrator | 2025-04-05 12:07:02.857090 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-05 12:07:02.857950 | orchestrator | Saturday 05 April 2025 12:07:02 +0000 (0:00:00.280) 0:00:11.421 ******** 2025-04-05 12:07:04.211845 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:04.212307 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:04.212343 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:04.212366 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:04.213194 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:04.214443 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:04.215409 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:04.216317 | orchestrator | 2025-04-05 12:07:04.217506 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-05 12:07:04.218544 | orchestrator | Saturday 05 
April 2025 12:07:04 +0000 (0:00:01.354) 0:00:12.776 ******** 2025-04-05 12:07:04.284959 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:04.307977 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:04.331012 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:04.352570 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:04.485407 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:04.486420 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:04.489567 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:04.490115 | orchestrator | 2025-04-05 12:07:04.490150 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-05 12:07:04.490764 | orchestrator | Saturday 05 April 2025 12:07:04 +0000 (0:00:00.276) 0:00:13.052 ******** 2025-04-05 12:07:05.103576 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:05.104054 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:05.104604 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:05.104926 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:05.106172 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:05.107198 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:05.107528 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:05.107935 | orchestrator | 2025-04-05 12:07:05.108535 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-05 12:07:05.109412 | orchestrator | Saturday 05 April 2025 12:07:05 +0000 (0:00:00.615) 0:00:13.668 ******** 2025-04-05 12:07:05.179655 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:05.207229 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:05.227166 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:05.257588 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:05.319747 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:05.320709 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:05.321732 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:05.322153 | orchestrator | 2025-04-05 12:07:05.322986 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-05 12:07:05.323628 | orchestrator | Saturday 05 April 2025 12:07:05 +0000 (0:00:00.219) 0:00:13.887 ******** 2025-04-05 12:07:05.889168 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:05.889620 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:05.890962 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:05.892194 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:05.893797 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:05.894822 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:05.895985 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:05.896802 | orchestrator | 2025-04-05 12:07:05.897491 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-05 12:07:05.898395 | orchestrator | Saturday 05 April 2025 12:07:05 +0000 (0:00:00.567) 0:00:14.455 ******** 2025-04-05 12:07:06.971842 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:06.973352 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:06.974381 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:06.975328 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:06.976816 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:06.979667 | orchestrator | 
changed: [testbed-node-5] 2025-04-05 12:07:06.981315 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:06.982494 | orchestrator | 2025-04-05 12:07:06.983352 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-05 12:07:06.984408 | orchestrator | Saturday 05 April 2025 12:07:06 +0000 (0:00:01.079) 0:00:15.535 ******** 2025-04-05 12:07:08.042185 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:08.042917 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:08.043705 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:08.044817 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:08.045794 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:08.046582 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:08.050523 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:08.053159 | orchestrator | 2025-04-05 12:07:08.053663 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-05 12:07:08.054497 | orchestrator | Saturday 05 April 2025 12:07:08 +0000 (0:00:01.072) 0:00:16.607 ******** 2025-04-05 12:07:08.359066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:08.359972 | orchestrator | 2025-04-05 12:07:08.360828 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-05 12:07:08.361865 | orchestrator | Saturday 05 April 2025 12:07:08 +0000 (0:00:00.317) 0:00:16.925 ******** 2025-04-05 12:07:08.434048 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:09.570460 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:09.571763 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:09.573041 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:09.573950 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:09.576174 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:09.576279 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:09.577078 | orchestrator | 2025-04-05 12:07:09.578260 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-05 12:07:09.578979 | orchestrator | Saturday 05 April 2025 12:07:09 +0000 (0:00:01.210) 0:00:18.136 ******** 2025-04-05 12:07:09.639575 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:09.665398 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:09.687514 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:09.712419 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:09.765319 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:09.765945 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:09.766827 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:09.767821 | orchestrator | 2025-04-05 12:07:09.768469 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-05 12:07:09.769142 | orchestrator | Saturday 05 April 2025 12:07:09 +0000 (0:00:00.196) 0:00:18.332 ******** 2025-04-05 12:07:09.869001 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:09.890061 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:09.913066 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:09.974463 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:09.974802 | orchestrator | ok: [testbed-node-3] 
2025-04-05 12:07:09.976301 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:09.976840 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:09.977934 | orchestrator | 2025-04-05 12:07:09.978842 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-05 12:07:09.980399 | orchestrator | Saturday 05 April 2025 12:07:09 +0000 (0:00:00.209) 0:00:18.542 ******** 2025-04-05 12:07:10.045729 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:10.074755 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:10.096426 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:10.122934 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:10.172354 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:10.173976 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:10.174979 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:10.175610 | orchestrator | 2025-04-05 12:07:10.176487 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-05 12:07:10.177277 | orchestrator | Saturday 05 April 2025 12:07:10 +0000 (0:00:00.197) 0:00:18.740 ******** 2025-04-05 12:07:10.437387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:10.439453 | orchestrator | 2025-04-05 12:07:10.439489 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-05 12:07:10.440257 | orchestrator | Saturday 05 April 2025 12:07:10 +0000 (0:00:00.264) 0:00:19.004 ******** 2025-04-05 12:07:10.927899 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:10.928348 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:10.929269 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:10.930162 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:10.931022 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:10.931697 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:10.932549 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:10.933242 | orchestrator | 2025-04-05 12:07:10.933737 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-05 12:07:10.934636 | orchestrator | Saturday 05 April 2025 12:07:10 +0000 (0:00:00.489) 0:00:19.493 ******** 2025-04-05 12:07:11.022827 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:11.042117 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:11.063333 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:11.120722 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:11.120965 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:11.121825 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:11.122897 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:11.123761 | orchestrator | 2025-04-05 12:07:11.124590 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-05 12:07:11.125487 | orchestrator | Saturday 05 April 2025 12:07:11 +0000 (0:00:00.195) 0:00:19.688 ******** 2025-04-05 12:07:12.079693 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:12.079925 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:12.081365 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:12.081759 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:12.081805 | 
orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:12.082285 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:12.082756 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:12.083179 | orchestrator | 2025-04-05 12:07:12.083749 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-05 12:07:12.084042 | orchestrator | Saturday 05 April 2025 12:07:12 +0000 (0:00:00.956) 0:00:20.645 ******** 2025-04-05 12:07:12.684091 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:12.684376 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:12.685352 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:12.686826 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:12.687911 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:12.687950 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:12.688893 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:12.689569 | orchestrator | 2025-04-05 12:07:12.690335 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-05 12:07:12.691015 | orchestrator | Saturday 05 April 2025 12:07:12 +0000 (0:00:00.605) 0:00:21.250 ******** 2025-04-05 12:07:13.894834 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:13.895832 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:13.896513 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:13.898369 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:13.900610 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:13.901426 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:13.902121 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:13.902893 | orchestrator | 2025-04-05 12:07:13.903645 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-05 12:07:13.903976 | orchestrator | Saturday 05 April 2025 12:07:13 +0000 (0:00:01.209) 0:00:22.460 ******** 2025-04-05 12:07:28.772481 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:28.773621 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:28.773659 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:28.773683 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:28.773998 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:28.774556 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:28.775092 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:28.776039 | orchestrator | 2025-04-05 12:07:28.776820 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-04-05 12:07:28.777256 | orchestrator | Saturday 05 April 2025 12:07:28 +0000 (0:00:14.873) 0:00:37.333 ******** 2025-04-05 12:07:28.841943 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:28.864838 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:28.889619 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:28.913439 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:28.973558 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:28.973713 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:28.974556 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:28.979259 | orchestrator | 2025-04-05 12:07:28.979718 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-04-05 12:07:28.980259 | orchestrator | Saturday 05 April 2025 12:07:28 +0000 (0:00:00.207) 0:00:37.541 ******** 2025-04-05 12:07:29.046964 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:29.071547 | 
orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:29.096402 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:29.119887 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:29.183732 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:29.183913 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:29.184839 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:29.185370 | orchestrator | 2025-04-05 12:07:29.186109 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-04-05 12:07:29.186435 | orchestrator | Saturday 05 April 2025 12:07:29 +0000 (0:00:00.210) 0:00:37.751 ******** 2025-04-05 12:07:29.264467 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:29.285141 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:29.310709 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:29.334239 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:29.403004 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:29.403369 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:29.403403 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:29.403544 | orchestrator | 2025-04-05 12:07:29.403767 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-04-05 12:07:29.403888 | orchestrator | Saturday 05 April 2025 12:07:29 +0000 (0:00:00.218) 0:00:37.970 ******** 2025-04-05 12:07:29.684591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:29.685435 | orchestrator | 2025-04-05 12:07:29.685470 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-04-05 12:07:29.687414 | orchestrator | Saturday 05 April 2025 12:07:29 +0000 (0:00:00.280) 0:00:38.251 ******** 2025-04-05 12:07:31.565557 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:31.565961 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:31.567991 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:31.568404 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:31.569124 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:31.569560 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:31.570224 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:31.570760 | orchestrator | 2025-04-05 12:07:31.571342 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-04-05 12:07:31.572079 | orchestrator | Saturday 05 April 2025 12:07:31 +0000 (0:00:01.879) 0:00:40.130 ******** 2025-04-05 12:07:32.718629 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:32.719534 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:32.720319 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:32.722888 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:32.723753 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:32.724447 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:32.724869 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:32.725540 | orchestrator | 2025-04-05 12:07:32.725931 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-04-05 12:07:32.726588 | orchestrator | Saturday 05 April 2025 12:07:32 +0000 (0:00:01.153) 0:00:41.284 ******** 2025-04-05 12:07:33.480793 | orchestrator | ok: [testbed-manager] 2025-04-05 
12:07:33.480969 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:33.480996 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:33.482310 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:33.482399 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:33.482455 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:33.482472 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:33.482487 | orchestrator | 2025-04-05 12:07:33.482505 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-04-05 12:07:33.482589 | orchestrator | Saturday 05 April 2025 12:07:33 +0000 (0:00:00.763) 0:00:42.048 ******** 2025-04-05 12:07:33.791430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:33.792435 | orchestrator | 2025-04-05 12:07:33.792984 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-04-05 12:07:34.768593 | orchestrator | Saturday 05 April 2025 12:07:33 +0000 (0:00:00.309) 0:00:42.357 ******** 2025-04-05 12:07:34.768723 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:34.842704 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:34.842761 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:34.842776 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:34.842790 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:34.842804 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:34.842818 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:34.842833 | orchestrator | 2025-04-05 12:07:34.842888 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-04-05 12:07:34.842933 | orchestrator | Saturday 05 April 2025 12:07:34 +0000 (0:00:00.972) 0:00:43.329 ******** 2025-04-05 12:07:34.842958 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:07:34.866326 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:07:34.890459 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:07:34.911466 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:07:35.063402 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:07:35.064652 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:07:35.064930 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:07:35.068485 | orchestrator | 2025-04-05 12:07:45.357987 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-04-05 12:07:45.358897 | orchestrator | Saturday 05 April 2025 12:07:35 +0000 (0:00:00.300) 0:00:43.630 ******** 2025-04-05 12:07:45.359002 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:45.359082 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:45.359100 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:45.359117 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:45.359132 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:45.359154 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:45.359909 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:45.359980 | orchestrator | 2025-04-05 12:07:45.360683 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-04-05 12:07:45.360916 | orchestrator | Saturday 05 April 2025 12:07:45 +0000 (0:00:10.290) 0:00:53.920 
******** 2025-04-05 12:07:46.652063 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:46.653237 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:46.654115 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:46.654535 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:46.655480 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:46.655956 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:46.656894 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:46.657494 | orchestrator | 2025-04-05 12:07:46.658229 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-04-05 12:07:46.658635 | orchestrator | Saturday 05 April 2025 12:07:46 +0000 (0:00:01.297) 0:00:55.218 ******** 2025-04-05 12:07:47.499516 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:47.502760 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:47.503504 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:47.503998 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:47.504695 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:47.505587 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:47.506276 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:47.506982 | orchestrator | 2025-04-05 12:07:47.507806 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-04-05 12:07:47.508213 | orchestrator | Saturday 05 April 2025 12:07:47 +0000 (0:00:00.847) 0:00:56.065 ******** 2025-04-05 12:07:47.575920 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:47.600729 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:47.622470 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:47.651997 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:47.711768 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:47.712411 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:47.713475 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:47.714661 | orchestrator | 2025-04-05 12:07:47.714701 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-04-05 12:07:47.714914 | orchestrator | Saturday 05 April 2025 12:07:47 +0000 (0:00:00.213) 0:00:56.279 ******** 2025-04-05 12:07:47.789325 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:47.813052 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:47.835272 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:47.855616 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:47.909937 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:47.910242 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:47.911117 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:47.911884 | orchestrator | 2025-04-05 12:07:48.177220 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-04-05 12:07:48.177329 | orchestrator | Saturday 05 April 2025 12:07:47 +0000 (0:00:00.198) 0:00:56.477 ******** 2025-04-05 12:07:48.177361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:07:48.179897 | orchestrator | 2025-04-05 12:07:48.180630 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-04-05 12:07:48.181832 | orchestrator | Saturday 05 April 2025 12:07:48 +0000 (0:00:00.266) 0:00:56.744 
******** 2025-04-05 12:07:49.624576 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:49.624725 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:49.625221 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:49.625600 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:49.626109 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:49.626659 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:49.626961 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:49.627182 | orchestrator | 2025-04-05 12:07:49.628164 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-04-05 12:07:49.628834 | orchestrator | Saturday 05 April 2025 12:07:49 +0000 (0:00:01.443) 0:00:58.187 ******** 2025-04-05 12:07:50.161323 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:50.163094 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:50.163197 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:50.163224 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:50.163605 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:50.164526 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:50.164734 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:50.165485 | orchestrator | 2025-04-05 12:07:50.166523 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-04-05 12:07:50.167276 | orchestrator | Saturday 05 April 2025 12:07:50 +0000 (0:00:00.540) 0:00:58.728 ******** 2025-04-05 12:07:50.232447 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:50.256751 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:50.281902 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:50.304142 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:50.363190 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:50.363991 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:50.364575 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:50.365437 | orchestrator | 2025-04-05 12:07:50.366309 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-04-05 12:07:50.367193 | orchestrator | Saturday 05 April 2025 12:07:50 +0000 (0:00:00.203) 0:00:58.931 ******** 2025-04-05 12:07:51.841302 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:51.842554 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:51.844025 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:51.844452 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:51.845830 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:51.846879 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:51.848126 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:51.849050 | orchestrator | 2025-04-05 12:07:51.849716 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-04-05 12:07:51.850529 | orchestrator | Saturday 05 April 2025 12:07:51 +0000 (0:00:01.476) 0:01:00.408 ******** 2025-04-05 12:07:54.055200 | orchestrator | changed: [testbed-manager] 2025-04-05 12:07:54.055565 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:07:54.056000 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:07:54.059195 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:07:54.059568 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:07:54.060070 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:07:54.060463 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:07:54.062685 | orchestrator | 
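The osism.commons.packages steps around this point (cache update, upgrade, install, cache cleanup, autoremove) follow the standard ansible.builtin.apt pattern. A minimal sketch under assumed values is shown below; the required_packages list and the 3600-second cache validity are invented for illustration, and this is not the osism.commons.packages implementation.

---
# Hypothetical sketch of the apt handling seen in the log; package names
# and cache_valid_time are illustrative assumptions only.
- hosts: all
  become: true
  vars:
    required_packages:
      - tmux
      - vim
  tasks:
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true

With cache_valid_time set, the module skips the refresh when the package cache is recent enough, which is consistent with the "Update package cache" task above reporting ok rather than changed on all hosts.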
2025-04-05 12:07:54.063117 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-04-05 12:07:54.063532 | orchestrator | Saturday 05 April 2025 12:07:54 +0000 (0:00:02.211) 0:01:02.620 ******** 2025-04-05 12:07:56.343128 | orchestrator | ok: [testbed-manager] 2025-04-05 12:07:56.343809 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:07:56.344980 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:07:56.346775 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:07:56.347361 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:07:56.348099 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:07:56.348817 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:07:56.349641 | orchestrator | 2025-04-05 12:07:56.350361 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-04-05 12:07:56.350995 | orchestrator | Saturday 05 April 2025 12:07:56 +0000 (0:00:02.289) 0:01:04.909 ******** 2025-04-05 12:08:35.180497 | orchestrator | ok: [testbed-manager] 2025-04-05 12:08:35.182144 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:08:35.182646 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:08:35.182691 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:08:35.182707 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:08:35.182728 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:08:35.184048 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:08:35.184780 | orchestrator | 2025-04-05 12:08:35.185438 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-04-05 12:08:35.186187 | orchestrator | Saturday 05 April 2025 12:08:35 +0000 (0:00:38.833) 0:01:43.743 ******** 2025-04-05 12:09:29.052713 | orchestrator | changed: [testbed-manager] 2025-04-05 12:09:29.053586 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:09:29.053627 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:09:29.056513 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:09:29.056572 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:09:29.056595 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:09:29.056650 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:09:29.056668 | orchestrator | 2025-04-05 12:09:29.056683 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-04-05 12:09:29.056702 | orchestrator | Saturday 05 April 2025 12:09:29 +0000 (0:00:53.875) 0:02:37.618 ******** 2025-04-05 12:09:31.311478 | orchestrator | ok: [testbed-manager] 2025-04-05 12:09:31.312772 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:09:31.312802 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:09:31.312909 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:09:31.314338 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:09:31.314932 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:09:31.315911 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:09:31.317599 | orchestrator | 2025-04-05 12:09:31.319665 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-04-05 12:09:31.320647 | orchestrator | Saturday 05 April 2025 12:09:31 +0000 (0:00:02.257) 0:02:39.876 ******** 2025-04-05 12:09:42.729227 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:09:42.730061 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:09:42.730132 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:09:42.732261 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:09:42.733967 | orchestrator | 
ok: [testbed-node-3] 2025-04-05 12:09:42.734551 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:09:42.734582 | orchestrator | changed: [testbed-manager] 2025-04-05 12:09:42.735235 | orchestrator | 2025-04-05 12:09:42.735875 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-04-05 12:09:42.736772 | orchestrator | Saturday 05 April 2025 12:09:42 +0000 (0:00:11.417) 0:02:51.293 ******** 2025-04-05 12:09:43.069997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-05 12:09:43.071499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-05 12:09:43.071991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-05 12:09:43.073169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-05 12:09:43.075448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-04-05 12:09:43.077020 | orchestrator | 2025-04-05 12:09:43.077664 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-05 12:09:43.077695 | orchestrator | Saturday 05 April 2025 12:09:43 +0000 (0:00:00.341) 0:02:51.635 ******** 2025-04-05 12:09:43.123481 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-05 12:09:43.150241 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:43.221680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-05 12:09:43.650394 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:09:43.650595 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-05 12:09:43.650665 | orchestrator | 
skipping: [testbed-node-4] 2025-04-05 12:09:43.652290 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-05 12:09:43.653481 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:09:43.653511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:09:43.654451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:09:43.654600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:09:43.654984 | orchestrator | 2025-04-05 12:09:43.655392 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-05 12:09:43.655769 | orchestrator | Saturday 05 April 2025 12:09:43 +0000 (0:00:00.582) 0:02:52.218 ******** 2025-04-05 12:09:43.714400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-05 12:09:43.750567 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-05 12:09:43.750685 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-05 12:09:43.750704 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-05 12:09:43.750738 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-05 12:09:43.752015 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-05 12:09:43.752087 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-05 12:09:43.752436 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-05 12:09:43.752466 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-05 12:09:43.752486 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-05 12:09:43.780921 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:43.876344 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-05 12:09:43.876653 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-05 12:09:43.877198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-05 12:09:43.877828 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-05 12:09:43.880421 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-05 12:09:43.881175 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-05 12:09:43.881214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-05 12:09:43.885521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-05 12:09:43.887275 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-05 12:09:43.887381 | orchestrator | skipping: [testbed-node-3] 
=> (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-05 12:09:43.887418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-05 12:09:43.887698 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-05 12:09:43.889146 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-05 12:09:43.889933 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-05 12:09:43.891574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-05 12:09:43.892129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-05 12:09:43.892603 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-05 12:09:43.895192 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-05 12:09:48.500064 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-05 12:09:48.500973 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:09:48.501298 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-05 12:09:48.502475 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:09:48.503507 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-05 12:09:48.505143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-05 12:09:48.505657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-05 12:09:48.505911 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-05 12:09:48.506859 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-05 12:09:48.507341 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-05 12:09:48.507984 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-05 12:09:48.508430 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-05 12:09:48.508855 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-05 12:09:48.509642 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-05 12:09:48.509906 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:09:48.510496 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-05 12:09:48.511312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-05 12:09:48.511694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-05 12:09:48.512156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-05 12:09:48.512787 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-05 12:09:48.513208 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-05 12:09:48.513757 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-05 12:09:48.514559 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-05 12:09:48.514877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-05 12:09:48.515655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-05 12:09:48.515966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-05 12:09:48.516443 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-05 12:09:48.516990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-05 12:09:48.517576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-05 12:09:48.517995 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-05 12:09:48.518361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-05 12:09:48.518966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-05 12:09:48.519240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-05 12:09:48.519633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-05 12:09:48.519990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-05 12:09:48.520275 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-05 12:09:48.520615 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-05 12:09:48.521030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-05 12:09:48.521317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-05 12:09:48.521656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-05 12:09:48.521957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-05 12:09:48.523222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-05 12:09:48.523414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-05 12:09:48.523755 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-05 12:09:48.524110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-05 12:09:48.524481 | orchestrator | 2025-04-05 12:09:48.524726 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-05 12:09:48.525078 | orchestrator | Saturday 05 April 2025 12:09:48 +0000 
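Annotation: the task above applies RabbitMQ-related kernel tuning only on the three control nodes (testbed-node-0/1/2) and skips the manager and compute nodes. As a rough illustration of that pattern, here is a minimal sketch of a group-scoped sysctl loop with ansible.posix.sysctl, using the exact parameters and values visible in the log; the variable name `rabbitmq_sysctl` and the group name `rabbitmq` are assumptions for illustration, not the internals of the osism.commons.sysctl role.

```yaml
# Sketch only: group-scoped kernel tuning similar to what the log shows.
# Variable and group names are illustrative, not the osism role's own.
- name: Apply RabbitMQ-related sysctl parameters on control nodes
  hosts: all
  become: true
  vars:
    rabbitmq_sysctl:
      - { name: net.ipv4.tcp_keepalive_time, value: 6 }
      - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
      - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
      - { name: net.core.wmem_max, value: 16777216 }
      - { name: net.core.rmem_max, value: 16777216 }
      - { name: net.ipv4.tcp_fin_timeout, value: 20 }
      - { name: net.ipv4.tcp_tw_reuse, value: 1 }
      - { name: net.core.somaxconn, value: 4096 }
      - { name: net.ipv4.tcp_syncookies, value: 0 }
      - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
  tasks:
    - name: Set sysctl parameters on rabbitmq hosts
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop: "{{ rabbitmq_sysctl }}"
      # The group condition explains the changed/skipping split in the log.
      when: "'rabbitmq' in group_names"
```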
(0:00:04.847) 0:02:57.065 ******** 2025-04-05 12:09:49.155073 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.155713 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.156579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.156607 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.156627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.156826 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.156873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-05 12:09:49.157446 | orchestrator | 2025-04-05 12:09:49.157476 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-05 12:09:49.157565 | orchestrator | Saturday 05 April 2025 12:09:49 +0000 (0:00:00.656) 0:02:57.722 ******** 2025-04-05 12:09:49.216723 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-05 12:09:49.248936 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:49.249017 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-05 12:09:49.249340 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-05 12:09:49.277199 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:09:49.305994 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-05 12:09:49.332938 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:09:49.332975 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:09:49.812583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-05 12:09:49.813171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-05 12:09:49.815413 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-05 12:09:49.815476 | orchestrator | 2025-04-05 12:09:49.816034 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-05 12:09:49.817095 | orchestrator | Saturday 05 April 2025 12:09:49 +0000 (0:00:00.656) 0:02:58.379 ******** 2025-04-05 12:09:49.869920 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-05 12:09:49.896226 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:49.897267 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-05 12:09:49.897341 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-05 12:09:49.922608 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:09:49.947632 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:09:49.953587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-05 12:09:49.970453 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:09:50.489488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-05 12:09:50.489656 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-05 12:09:50.490270 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-05 12:09:50.490363 | orchestrator | 2025-04-05 12:09:50.491180 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-05 12:09:50.492617 | orchestrator | Saturday 05 April 2025 12:09:50 +0000 (0:00:00.678) 0:02:59.057 ******** 2025-04-05 12:09:50.578733 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:50.605093 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:09:50.626982 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:09:50.651978 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:09:50.784479 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:09:50.785182 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:09:50.786200 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:09:50.787288 | orchestrator | 2025-04-05 12:09:50.788518 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-05 12:09:50.789199 | orchestrator | Saturday 05 April 2025 12:09:50 +0000 (0:00:00.294) 0:02:59.351 ******** 2025-04-05 12:09:55.993663 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:09:55.996271 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:09:55.998673 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:09:55.999763 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:09:56.002652 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:09:56.003141 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:09:56.004297 | orchestrator | ok: [testbed-manager] 2025-04-05 12:09:56.004867 | orchestrator | 2025-04-05 12:09:56.005485 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-05 12:09:56.006098 | orchestrator | Saturday 05 April 2025 12:09:55 +0000 (0:00:05.208) 0:03:04.560 ******** 2025-04-05 12:09:56.067067 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-05 12:09:56.067308 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-05 12:09:56.121286 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:09:56.122379 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-05 12:09:56.160484 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:09:56.208787 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:09:56.211233 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-05 12:09:56.215643 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-05 12:09:56.248147 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:09:56.321211 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:09:56.321758 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-05 12:09:56.322661 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:09:56.323153 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-05 12:09:56.324040 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:09:56.327917 | orchestrator | 2025-04-05 12:09:56.328251 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-05 12:09:56.328803 | 
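Annotation: the services role above first populates service facts and then only checks or starts services. A minimal sketch of that pattern, assuming a hypothetical `required_services` list (cron is the service actually started in the next lines of the log):

```yaml
# Sketch only: gather service facts, then start/enable required services.
# 'required_services' is a hypothetical variable, not the osism role's own.
- name: Ensure required services are running
  hosts: all
  become: true
  vars:
    required_services:
      - cron
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Start/enable required services
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: "{{ required_services }}"
      # Only touch units that actually exist on the host.
      when: (item + '.service') in ansible_facts.services
```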
orchestrator | Saturday 05 April 2025 12:09:56 +0000 (0:00:00.328) 0:03:04.888 ******** 2025-04-05 12:09:57.673517 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-05 12:09:57.677358 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-05 12:09:57.677401 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-05 12:09:57.680905 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-05 12:09:57.681417 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-04-05 12:09:57.681641 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-05 12:09:57.681890 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-04-05 12:09:57.682417 | orchestrator | 2025-04-05 12:09:57.682798 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-05 12:09:57.683261 | orchestrator | Saturday 05 April 2025 12:09:57 +0000 (0:00:01.349) 0:03:06.238 ******** 2025-04-05 12:09:58.083534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:09:58.083802 | orchestrator | 2025-04-05 12:09:58.084235 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-04-05 12:09:58.084547 | orchestrator | Saturday 05 April 2025 12:09:58 +0000 (0:00:00.412) 0:03:06.650 ******** 2025-04-05 12:09:59.526675 | orchestrator | ok: [testbed-manager] 2025-04-05 12:09:59.527259 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:09:59.528088 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:09:59.528747 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:09:59.528978 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:09:59.530489 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:09:59.531089 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:09:59.531497 | orchestrator | 2025-04-05 12:09:59.531817 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-05 12:09:59.533220 | orchestrator | Saturday 05 April 2025 12:09:59 +0000 (0:00:01.441) 0:03:08.091 ******** 2025-04-05 12:10:00.165762 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:00.167008 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:00.167048 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:00.167421 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:00.168649 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:00.169243 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:00.169271 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:00.170131 | orchestrator | 2025-04-05 12:10:00.170772 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-05 12:10:00.171254 | orchestrator | Saturday 05 April 2025 12:10:00 +0000 (0:00:00.639) 0:03:08.731 ******** 2025-04-05 12:10:00.871961 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:00.873926 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:00.874979 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:00.875037 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:00.875062 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:00.876815 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:00.876869 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:00.877575 | orchestrator | 2025-04-05 12:10:00.878087 | 
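Annotation: the motd role above removes the update-motd package, checks for /etc/default/motd-news and then disables the dynamic motd-news service. One common way to express that last step is shown below; whether the osism.commons.motd role edits the defaults file, masks the timer, or does something else is not visible in this log.

```yaml
# Sketch only: one common way to disable Ubuntu's dynamic motd-news,
# not necessarily how the osism.commons.motd role implements it.
- name: Disable dynamic motd-news
  hosts: all
  become: true
  tasks:
    - name: Check if /etc/default/motd-news exists
      ansible.builtin.stat:
        path: /etc/default/motd-news
      register: motd_news_default

    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: 'ENABLED=0'
      when: motd_news_default.stat.exists
```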
orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-05 12:10:00.878622 | orchestrator | Saturday 05 April 2025 12:10:00 +0000 (0:00:00.705) 0:03:09.437 ******** 2025-04-05 12:10:01.526378 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:01.527892 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:01.528014 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:01.529329 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:01.529620 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:01.530603 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:01.531714 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:01.532402 | orchestrator | 2025-04-05 12:10:01.533405 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-05 12:10:01.533677 | orchestrator | Saturday 05 April 2025 12:10:01 +0000 (0:00:00.656) 0:03:10.094 ******** 2025-04-05 12:10:02.592739 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853354.985255, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.594427 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853330.8313, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.595247 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853343.9694917, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.595911 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853337.4892008, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.597597 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853338.5111208, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.598228 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853346.1337876, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.598771 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743853341.1594112, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.599330 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853383.7355793, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.599854 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853281.6621842, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.600482 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853294.6188502, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.600976 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853288.3947964, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.601738 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853289.8739462, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.602237 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853296.4939282, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.602473 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743853290.6512308, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:10:02.602902 | orchestrator | 2025-04-05 12:10:02.603354 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-05 12:10:02.604553 | orchestrator | Saturday 05 April 2025 12:10:02 +0000 (0:00:01.062) 0:03:11.156 ******** 2025-04-05 12:10:03.710268 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:03.712336 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:03.712917 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:03.712945 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:03.712967 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:03.714096 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:03.714810 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:03.715481 | orchestrator | 2025-04-05 12:10:03.716568 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-05 12:10:03.717782 | orchestrator | Saturday 05 April 2025 12:10:03 +0000 (0:00:01.120) 0:03:12.276 ******** 2025-04-05 12:10:05.019309 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:05.020286 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:05.020600 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:05.022391 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:05.023281 | orchestrator | changed: 
[testbed-node-3] 2025-04-05 12:10:05.024250 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:05.025080 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:05.025677 | orchestrator | 2025-04-05 12:10:05.026374 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-04-05 12:10:05.027002 | orchestrator | Saturday 05 April 2025 12:10:05 +0000 (0:00:01.309) 0:03:13.586 ******** 2025-04-05 12:10:06.478619 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:06.478810 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:06.481747 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:06.482771 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:06.482801 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:06.482820 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:06.483484 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:06.484140 | orchestrator | 2025-04-05 12:10:06.484745 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-04-05 12:10:06.485398 | orchestrator | Saturday 05 April 2025 12:10:06 +0000 (0:00:01.457) 0:03:15.043 ******** 2025-04-05 12:10:06.557099 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:10:06.593107 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:10:06.636223 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:10:06.687085 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:10:06.725452 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:10:06.786978 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:10:06.790396 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:10:07.537269 | orchestrator | 2025-04-05 12:10:07.537370 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-04-05 12:10:07.537387 | orchestrator | Saturday 05 April 2025 12:10:06 +0000 (0:00:00.311) 0:03:15.354 ******** 2025-04-05 12:10:07.537416 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:07.539068 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:07.540363 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:07.540396 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:07.541283 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:07.542491 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:07.543567 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:07.544090 | orchestrator | 2025-04-05 12:10:07.544583 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-04-05 12:10:07.545616 | orchestrator | Saturday 05 April 2025 12:10:07 +0000 (0:00:00.748) 0:03:16.103 ******** 2025-04-05 12:10:07.933897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:10:07.934408 | orchestrator | 2025-04-05 12:10:07.936078 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-04-05 12:10:07.936882 | orchestrator | Saturday 05 April 2025 12:10:07 +0000 (0:00:00.395) 0:03:16.499 ******** 2025-04-05 12:10:16.310305 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:16.311262 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:16.311302 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:16.312688 | 
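Annotation: the motd tasks above enumerate /etc/pam.d, strip the pam_motd.so rules from sshd and login, install static motd/issue/issue.net files and set PrintMotd in sshd_config. A minimal sketch of that flow, with placeholder motd content (the testbed's actual templates are not shown in the log):

```yaml
# Sketch only: drop pam_motd.so rules and install a static /etc/motd.
# The motd content below is placeholder text, not the testbed's template.
- name: Replace dynamic motd with a static one
  hosts: all
  become: true
  tasks:
    - name: Get all configuration files in /etc/pam.d
      ansible.builtin.find:
        paths: /etc/pam.d
        file_type: file
      register: pamd_files

    - name: Remove pam_motd.so rule
      ansible.builtin.lineinfile:
        path: "{{ item.path }}"
        regexp: 'pam_motd\.so'
        state: absent
      loop: "{{ pamd_files.files }}"

    - name: Copy motd file
      ansible.builtin.copy:
        content: "Managed by Ansible.\n"
        dest: /etc/motd
        owner: root
        group: root
        mode: "0644"

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: 'PrintMotd no'
```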
orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:16.313755 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:16.314095 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:16.314705 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:16.315771 | orchestrator | 2025-04-05 12:10:16.316425 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-04-05 12:10:16.317332 | orchestrator | Saturday 05 April 2025 12:10:16 +0000 (0:00:08.377) 0:03:24.876 ******** 2025-04-05 12:10:17.461042 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:17.462302 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:17.462683 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:17.463429 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:17.464592 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:17.465483 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:17.466198 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:17.466940 | orchestrator | 2025-04-05 12:10:17.467677 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-04-05 12:10:17.468293 | orchestrator | Saturday 05 April 2025 12:10:17 +0000 (0:00:01.149) 0:03:26.026 ******** 2025-04-05 12:10:18.490151 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:18.491265 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:18.492426 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:18.493345 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:18.494195 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:18.495064 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:18.495922 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:18.496752 | orchestrator | 2025-04-05 12:10:18.497616 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-04-05 12:10:18.499413 | orchestrator | Saturday 05 April 2025 12:10:18 +0000 (0:00:01.030) 0:03:27.056 ******** 2025-04-05 12:10:18.827335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:10:18.827584 | orchestrator | 2025-04-05 12:10:18.828447 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-04-05 12:10:18.829300 | orchestrator | Saturday 05 April 2025 12:10:18 +0000 (0:00:00.338) 0:03:27.395 ******** 2025-04-05 12:10:27.509580 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:27.510168 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:27.510620 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:27.511533 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:27.512137 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:27.512669 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:27.513261 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:27.513756 | orchestrator | 2025-04-05 12:10:27.515179 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-04-05 12:10:28.140549 | orchestrator | Saturday 05 April 2025 12:10:27 +0000 (0:00:08.679) 0:03:36.075 ******** 2025-04-05 12:10:28.140712 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:28.140785 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:28.140804 | orchestrator | 
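Annotation: the rng role above installs an rng package, removes haveged and manages the daemon. A minimal sketch of that sequence; the package and service names below ("rng-tools", "rngd") are assumptions for illustration and may not match what osism.services.rng actually uses on Ubuntu 24.04.

```yaml
# Sketch only: the package/service names are assumptions, not the role's own.
- name: Provide a hardware RNG daemon
  hosts: all
  become: true
  vars:
    rng_package: rng-tools   # assumption
    rng_service: rngd        # assumption
  tasks:
    - name: Install rng package
      ansible.builtin.apt:
        name: "{{ rng_package }}"
        state: present

    - name: Remove haveged package
      ansible.builtin.apt:
        name: haveged
        state: absent

    - name: Manage rng service
      ansible.builtin.service:
        name: "{{ rng_service }}"
        state: started
        enabled: true
```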
changed: [testbed-node-1] 2025-04-05 12:10:28.140822 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:28.141600 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:28.141795 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:28.141824 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:28.142087 | orchestrator | 2025-04-05 12:10:28.142980 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-04-05 12:10:28.143083 | orchestrator | Saturday 05 April 2025 12:10:28 +0000 (0:00:00.630) 0:03:36.705 ******** 2025-04-05 12:10:29.316388 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:29.316648 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:29.317087 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:29.317293 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:29.317745 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:29.318503 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:29.319497 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:29.319657 | orchestrator | 2025-04-05 12:10:29.320171 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-04-05 12:10:29.321428 | orchestrator | Saturday 05 April 2025 12:10:29 +0000 (0:00:01.176) 0:03:37.882 ******** 2025-04-05 12:10:30.448034 | orchestrator | changed: [testbed-manager] 2025-04-05 12:10:30.448191 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:10:30.448999 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:10:30.450005 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:10:30.450470 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:10:30.451281 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:10:30.451564 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:10:30.452425 | orchestrator | 2025-04-05 12:10:30.453871 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-04-05 12:10:30.454063 | orchestrator | Saturday 05 April 2025 12:10:30 +0000 (0:00:01.132) 0:03:39.014 ******** 2025-04-05 12:10:30.550330 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:30.598097 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:30.629681 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:30.664560 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:30.736686 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:30.737267 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:30.738646 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:30.739171 | orchestrator | 2025-04-05 12:10:30.739935 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-04-05 12:10:30.741414 | orchestrator | Saturday 05 April 2025 12:10:30 +0000 (0:00:00.290) 0:03:39.304 ******** 2025-04-05 12:10:30.835409 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:30.875272 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:30.906556 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:30.941027 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:31.010754 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:31.011244 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:31.012293 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:31.013047 | orchestrator | 2025-04-05 12:10:31.013973 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-04-05 
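Annotation: the smartd role above installs smartmontools, creates /var/log/smartd, drops a configuration file and manages the service. The sketch below follows that sequence; the smartd.conf directive and the service name are illustrative assumptions, not the osism.services.smartd defaults.

```yaml
# Sketch only: config content and service name are assumptions.
- name: Monitor disks with smartd
  hosts: all
  become: true
  tasks:
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"

    - name: Copy smartmontools configuration file
      ansible.builtin.copy:
        content: "DEVICESCAN -a -o on -S on\n"   # placeholder directive
        dest: /etc/smartd.conf
        mode: "0644"

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartmontools   # service name on Debian-family systems
        state: started
        enabled: true
```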
12:10:31.014639 | orchestrator | Saturday 05 April 2025 12:10:31 +0000 (0:00:00.274) 0:03:39.579 ******** 2025-04-05 12:10:31.135812 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:31.172233 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:31.201852 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:31.235927 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:31.318600 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:31.319173 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:31.320216 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:31.320924 | orchestrator | 2025-04-05 12:10:31.321945 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-04-05 12:10:31.322278 | orchestrator | Saturday 05 April 2025 12:10:31 +0000 (0:00:00.306) 0:03:39.885 ******** 2025-04-05 12:10:35.581577 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:10:35.581755 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:10:35.582259 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:10:35.582913 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:10:35.586144 | orchestrator | ok: [testbed-manager] 2025-04-05 12:10:35.586751 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:10:35.587449 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:10:35.588261 | orchestrator | 2025-04-05 12:10:35.589158 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-04-05 12:10:35.589643 | orchestrator | Saturday 05 April 2025 12:10:35 +0000 (0:00:04.263) 0:03:44.149 ******** 2025-04-05 12:10:35.964813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:10:35.965053 | orchestrator | 2025-04-05 12:10:35.965721 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-04-05 12:10:35.966556 | orchestrator | Saturday 05 April 2025 12:10:35 +0000 (0:00:00.383) 0:03:44.532 ******** 2025-04-05 12:10:36.039800 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.040008 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-04-05 12:10:36.040550 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.077539 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:10:36.077863 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-04-05 12:10:36.118430 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.119376 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:10:36.119455 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-04-05 12:10:36.157574 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.158617 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:10:36.158640 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-04-05 12:10:36.159475 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.159722 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-04-05 12:10:36.200105 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:10:36.200175 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.201323 | orchestrator | 
skipping: [testbed-node-4] => (item=apt-daily)  2025-04-05 12:10:36.270323 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:10:36.270716 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:10:36.271778 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-04-05 12:10:36.272587 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-04-05 12:10:36.272997 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:10:36.273498 | orchestrator | 2025-04-05 12:10:36.273974 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-04-05 12:10:36.275132 | orchestrator | Saturday 05 April 2025 12:10:36 +0000 (0:00:00.306) 0:03:44.838 ******** 2025-04-05 12:10:36.627498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:10:36.627648 | orchestrator | 2025-04-05 12:10:36.628355 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-04-05 12:10:36.628788 | orchestrator | Saturday 05 April 2025 12:10:36 +0000 (0:00:00.355) 0:03:45.194 ******** 2025-04-05 12:10:36.677712 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-04-05 12:10:36.823378 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:10:36.825631 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-04-05 12:10:36.864117 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:10:36.905806 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-04-05 12:10:36.941230 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-04-05 12:10:36.942099 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:10:36.980800 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-04-05 12:10:36.981588 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:10:36.983067 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-04-05 12:10:37.043876 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:10:37.044260 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:10:37.045046 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-04-05 12:10:37.046251 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:10:37.046595 | orchestrator | 2025-04-05 12:10:37.047234 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-04-05 12:10:37.047646 | orchestrator | Saturday 05 April 2025 12:10:37 +0000 (0:00:00.416) 0:03:45.611 ******** 2025-04-05 12:10:37.417857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:10:37.418904 | orchestrator | 2025-04-05 12:10:37.418944 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-04-05 12:10:37.420654 | orchestrator | Saturday 05 April 2025 12:10:37 +0000 (0:00:00.371) 0:03:45.983 ******** 2025-04-05 12:11:07.259533 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:07.259727 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:07.259753 | 
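Annotation: the cleanup role above walks through three patterns: disabling the apt-daily timers (skipped in this run), stopping unwanted services such as ModemManager (also skipped) and removing a list of packages (the ~30 s "Cleanup installed packages" step). A minimal sketch of those patterns; the timer, service and package lists below are illustrative, since the actual lists used by osism.commons.cleanup are not visible in this log.

```yaml
# Sketch only: illustrative lists, not the osism.commons.cleanup defaults.
- name: Clean up distribution defaults
  hosts: all
  become: true
  vars:
    cleanup_timers: [apt-daily-upgrade, apt-daily]
    cleanup_services: [ModemManager.service]
    cleanup_packages: [snapd, lxd-agent-loader]   # hypothetical examples
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Disable apt-daily timers
      ansible.builtin.systemd:
        name: "{{ item }}.timer"
        state: stopped
        enabled: false
      loop: "{{ cleanup_timers }}"

    - name: Cleanup services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop: "{{ cleanup_services }}"
      when: item in ansible_facts.services

    - name: Cleanup installed packages
      ansible.builtin.apt:
        name: "{{ cleanup_packages }}"
        state: absent
        purge: true
```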
orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:07.259768 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:07.259789 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:07.259989 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:07.262406 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:07.263160 | orchestrator | 2025-04-05 12:11:07.263761 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-04-05 12:11:07.264589 | orchestrator | Saturday 05 April 2025 12:11:07 +0000 (0:00:29.839) 0:04:15.823 ******** 2025-04-05 12:11:15.710240 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:15.710521 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:15.711641 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:15.713227 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:15.713386 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:15.713808 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:15.714315 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:15.714776 | orchestrator | 2025-04-05 12:11:15.715213 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-04-05 12:11:15.715625 | orchestrator | Saturday 05 April 2025 12:11:15 +0000 (0:00:08.453) 0:04:24.277 ******** 2025-04-05 12:11:24.138618 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:24.139076 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:24.139119 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:24.141736 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:24.141997 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:24.142069 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:24.142087 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:24.142106 | orchestrator | 2025-04-05 12:11:24.143281 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-04-05 12:11:24.143727 | orchestrator | Saturday 05 April 2025 12:11:24 +0000 (0:00:08.426) 0:04:32.703 ******** 2025-04-05 12:11:26.389047 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:26.389230 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:26.389593 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:26.391084 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:26.392482 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:26.393487 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:26.393784 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:26.393812 | orchestrator | 2025-04-05 12:11:26.394311 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-04-05 12:11:26.394718 | orchestrator | Saturday 05 April 2025 12:11:26 +0000 (0:00:02.251) 0:04:34.955 ******** 2025-04-05 12:11:33.030126 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:33.030301 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:33.030327 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:33.030840 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:33.032618 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:33.032774 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:33.033967 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:33.034569 | orchestrator | 2025-04-05 12:11:33.035963 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] 
************************* 2025-04-05 12:11:33.036489 | orchestrator | Saturday 05 April 2025 12:11:33 +0000 (0:00:06.641) 0:04:41.596 ******** 2025-04-05 12:11:33.380046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:11:33.380760 | orchestrator | 2025-04-05 12:11:33.381429 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-04-05 12:11:33.381876 | orchestrator | Saturday 05 April 2025 12:11:33 +0000 (0:00:00.351) 0:04:41.948 ******** 2025-04-05 12:11:34.108575 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:34.109782 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:34.111261 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:34.112087 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:34.112238 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:34.112480 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:34.112755 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:34.113635 | orchestrator | 2025-04-05 12:11:34.116536 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-04-05 12:11:34.117604 | orchestrator | Saturday 05 April 2025 12:11:34 +0000 (0:00:00.721) 0:04:42.669 ******** 2025-04-05 12:11:36.180948 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:36.181221 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:36.183064 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:36.184650 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:36.185490 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:36.186900 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:36.187884 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:36.188866 | orchestrator | 2025-04-05 12:11:36.190879 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-04-05 12:11:36.191185 | orchestrator | Saturday 05 April 2025 12:11:36 +0000 (0:00:02.078) 0:04:44.748 ******** 2025-04-05 12:11:36.959712 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:36.960155 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:36.960931 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:36.962490 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:36.963117 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:36.963944 | orchestrator | changed: [testbed-manager] 2025-04-05 12:11:36.964484 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:36.964942 | orchestrator | 2025-04-05 12:11:36.965655 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-04-05 12:11:36.966123 | orchestrator | Saturday 05 April 2025 12:11:36 +0000 (0:00:00.778) 0:04:45.526 ******** 2025-04-05 12:11:37.029155 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:37.060046 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:37.091430 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:37.120992 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:37.149359 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:37.199281 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:37.199683 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:11:37.199712 | orchestrator | 2025-04-05 
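Annotation: the steps above purge cloud-init (package and configuration directory) and pin the timezone to UTC via tzdata. A minimal sketch of the equivalent tasks; /etc/cloud as the cloud-init configuration directory is an assumption based on the default layout, not a path printed in the log.

```yaml
# Sketch only: roughly matching the logged tasks, not the roles' actual code.
- name: Remove cloud-init and set timezone
  hosts: all
  become: true
  tasks:
    - name: Remove cloudinit package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true

    - name: Remove cloud-init configuration directory
      ansible.builtin.file:
        path: /etc/cloud   # default location; assumption
        state: absent

    - name: Install tzdata package
      ansible.builtin.apt:
        name: tzdata
        state: present

    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC
```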
12:11:37.202474 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-04-05 12:11:37.203200 | orchestrator | Saturday 05 April 2025 12:11:37 +0000 (0:00:00.240) 0:04:45.767 ******** 2025-04-05 12:11:37.256634 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:37.300013 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:37.330806 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:37.360363 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:37.389518 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:37.559586 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:37.560109 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:11:37.560906 | orchestrator | 2025-04-05 12:11:37.561714 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-04-05 12:11:37.562415 | orchestrator | Saturday 05 April 2025 12:11:37 +0000 (0:00:00.359) 0:04:46.126 ******** 2025-04-05 12:11:37.642666 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:37.676708 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:37.709919 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:37.784150 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:37.853276 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:37.853623 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:37.854425 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:37.857708 | orchestrator | 2025-04-05 12:11:37.858217 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-04-05 12:11:37.859545 | orchestrator | Saturday 05 April 2025 12:11:37 +0000 (0:00:00.295) 0:04:46.422 ******** 2025-04-05 12:11:37.957945 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:37.986802 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:38.018782 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:38.062184 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:38.224971 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:38.225709 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:38.226812 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:11:38.227792 | orchestrator | 2025-04-05 12:11:38.228980 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-04-05 12:11:38.229904 | orchestrator | Saturday 05 April 2025 12:11:38 +0000 (0:00:00.368) 0:04:46.790 ******** 2025-04-05 12:11:38.328471 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:38.363749 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:38.394917 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:38.428412 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:38.504271 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:38.505528 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:38.506253 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:38.507081 | orchestrator | 2025-04-05 12:11:38.507326 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-04-05 12:11:38.507726 | orchestrator | Saturday 05 April 2025 12:11:38 +0000 (0:00:00.281) 0:04:47.072 ******** 2025-04-05 12:11:38.607231 | orchestrator | ok: [testbed-manager] =>  2025-04-05 12:11:38.607377 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.636959 | orchestrator | ok: [testbed-node-0] =>  2025-04-05 12:11:38.637073 | orchestrator |  
docker_version: 5:27.5.1 2025-04-05 12:11:38.687564 | orchestrator | ok: [testbed-node-1] =>  2025-04-05 12:11:38.687720 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.722455 | orchestrator | ok: [testbed-node-2] =>  2025-04-05 12:11:38.723572 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.810706 | orchestrator | ok: [testbed-node-3] =>  2025-04-05 12:11:38.811425 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.812878 | orchestrator | ok: [testbed-node-4] =>  2025-04-05 12:11:38.813810 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.814623 | orchestrator | ok: [testbed-node-5] =>  2025-04-05 12:11:38.815368 | orchestrator |  docker_version: 5:27.5.1 2025-04-05 12:11:38.816074 | orchestrator | 2025-04-05 12:11:38.817271 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-04-05 12:11:38.817551 | orchestrator | Saturday 05 April 2025 12:11:38 +0000 (0:00:00.304) 0:04:47.377 ******** 2025-04-05 12:11:38.907684 | orchestrator | ok: [testbed-manager] =>  2025-04-05 12:11:38.908286 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:38.937420 | orchestrator | ok: [testbed-node-0] =>  2025-04-05 12:11:38.938202 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:38.966530 | orchestrator | ok: [testbed-node-1] =>  2025-04-05 12:11:38.967298 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:38.995559 | orchestrator | ok: [testbed-node-2] =>  2025-04-05 12:11:38.996235 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:39.049498 | orchestrator | ok: [testbed-node-3] =>  2025-04-05 12:11:39.050498 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:39.052123 | orchestrator | ok: [testbed-node-4] =>  2025-04-05 12:11:39.053202 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:39.053934 | orchestrator | ok: [testbed-node-5] =>  2025-04-05 12:11:39.054659 | orchestrator |  docker_cli_version: 5:27.5.1 2025-04-05 12:11:39.055362 | orchestrator | 2025-04-05 12:11:39.055902 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-04-05 12:11:39.056482 | orchestrator | Saturday 05 April 2025 12:11:39 +0000 (0:00:00.240) 0:04:47.618 ******** 2025-04-05 12:11:39.154151 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:39.182936 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:39.219347 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:39.248653 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:39.298585 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:39.298729 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:39.299037 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:11:39.299065 | orchestrator | 2025-04-05 12:11:39.299315 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-04-05 12:11:39.299685 | orchestrator | Saturday 05 April 2025 12:11:39 +0000 (0:00:00.248) 0:04:47.866 ******** 2025-04-05 12:11:39.374179 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:39.404174 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:39.440917 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:39.512357 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:39.561326 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:39.561865 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:39.565240 | orchestrator | 
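Annotation: the docker role above resolves docker_version and docker_cli_version to 5:27.5.1 on every host, and the install tasks that follow in the log add the upstream apt repository (apt-transport-https, GPG key, repository) before installing the packages. A minimal sketch of such a pinned install from the upstream Docker repository; the key handling and the exact version suffix may differ from what osism.services.docker actually does.

```yaml
# Sketch only: upstream repo setup plus version pin as printed in the log.
# apt_key is deprecated on newer Debian/Ubuntu; a signed-by keyring is the
# modern alternative, but the module still works and keeps the sketch short.
- name: Install Docker CE from the upstream repository
  hosts: all
  become: true
  vars:
    docker_version: "5:27.5.1"
  tasks:
    - name: Install apt-transport-https package
      ansible.builtin.apt:
        name: apt-transport-https
        state: present

    - name: Add repository gpg key
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Install docker-ce and docker-ce-cli in the logged version
      ansible.builtin.apt:
        name:
          - "docker-ce={{ docker_version }}*"
          - "docker-ce-cli={{ docker_version }}*"
        state: present
        update_cache: true
```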
skipping: [testbed-node-5] 2025-04-05 12:11:39.565675 | orchestrator | 2025-04-05 12:11:39.565702 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-04-05 12:11:39.565723 | orchestrator | Saturday 05 April 2025 12:11:39 +0000 (0:00:00.263) 0:04:48.129 ******** 2025-04-05 12:11:40.001493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:11:40.001666 | orchestrator | 2025-04-05 12:11:40.002225 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-04-05 12:11:40.002582 | orchestrator | Saturday 05 April 2025 12:11:39 +0000 (0:00:00.439) 0:04:48.569 ******** 2025-04-05 12:11:40.936875 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:40.937286 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:40.937327 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:40.938896 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:40.939316 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:40.939347 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:40.939703 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:40.940636 | orchestrator | 2025-04-05 12:11:40.941143 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-04-05 12:11:40.941676 | orchestrator | Saturday 05 April 2025 12:11:40 +0000 (0:00:00.933) 0:04:49.503 ******** 2025-04-05 12:11:44.118421 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:44.119244 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:11:44.126259 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:11:44.126377 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:11:44.126393 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:11:44.126403 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:11:44.126412 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:11:44.126424 | orchestrator | 2025-04-05 12:11:44.127073 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-04-05 12:11:44.127732 | orchestrator | Saturday 05 April 2025 12:11:44 +0000 (0:00:03.182) 0:04:52.685 ******** 2025-04-05 12:11:44.186493 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-04-05 12:11:44.258513 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-04-05 12:11:44.258755 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-04-05 12:11:44.259335 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-04-05 12:11:44.259869 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-04-05 12:11:44.260550 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-04-05 12:11:44.332366 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:11:44.332430 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-04-05 12:11:44.333229 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-04-05 12:11:44.333515 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-04-05 12:11:44.410397 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:11:44.410556 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-04-05 12:11:44.411317 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io)  2025-04-05 12:11:44.413262 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-04-05 12:11:44.481784 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:11:44.483514 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-04-05 12:11:44.485796 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-04-05 12:11:44.486129 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-04-05 12:11:44.556862 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:11:44.557232 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-04-05 12:11:44.557257 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-04-05 12:11:44.557276 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-04-05 12:11:44.692016 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:11:44.693217 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:11:44.693732 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-04-05 12:11:44.697355 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-04-05 12:11:44.697436 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-04-05 12:11:44.697455 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:11:44.697469 | orchestrator | 2025-04-05 12:11:44.697484 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-04-05 12:11:44.697524 | orchestrator | Saturday 05 April 2025 12:11:44 +0000 (0:00:00.573) 0:04:53.258 ******** 2025-04-05 12:11:51.678953 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:51.680427 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:51.681220 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:51.682844 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:51.683667 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:51.684662 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:51.685241 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:51.685810 | orchestrator | 2025-04-05 12:11:51.686878 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-04-05 12:11:51.687276 | orchestrator | Saturday 05 April 2025 12:11:51 +0000 (0:00:06.985) 0:05:00.244 ******** 2025-04-05 12:11:52.710919 | orchestrator | ok: [testbed-manager] 2025-04-05 12:11:52.711846 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:11:52.712033 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:11:52.712257 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:11:52.712655 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:11:52.713908 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:11:52.715032 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:11:52.716135 | orchestrator | 2025-04-05 12:11:52.716857 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-04-05 12:11:52.717486 | orchestrator | Saturday 05 April 2025 12:11:52 +0000 (0:00:01.034) 0:05:01.278 ******** 2025-04-05 12:12:00.764678 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:00.764907 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:00.765322 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:00.765936 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:00.766348 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:00.766376 | orchestrator | changed: 
[testbed-node-5] 2025-04-05 12:12:00.767479 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:00.767661 | orchestrator | 2025-04-05 12:12:00.768067 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-04-05 12:12:00.769003 | orchestrator | Saturday 05 April 2025 12:12:00 +0000 (0:00:08.051) 0:05:09.330 ******** 2025-04-05 12:12:03.402191 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:03.402368 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:03.403450 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:03.404379 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:03.406181 | orchestrator | changed: [testbed-manager] 2025-04-05 12:12:03.407508 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:03.409143 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:03.409466 | orchestrator | 2025-04-05 12:12:03.410610 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-04-05 12:12:03.411230 | orchestrator | Saturday 05 April 2025 12:12:03 +0000 (0:00:02.636) 0:05:11.967 ******** 2025-04-05 12:12:04.637387 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:04.638012 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:04.638375 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:04.638547 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:04.638968 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:04.639205 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:04.639412 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:04.639775 | orchestrator | 2025-04-05 12:12:04.639930 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-04-05 12:12:04.640229 | orchestrator | Saturday 05 April 2025 12:12:04 +0000 (0:00:01.238) 0:05:13.205 ******** 2025-04-05 12:12:06.039703 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:06.041587 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:06.041627 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:06.041643 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:06.041657 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:06.041671 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:06.041692 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:06.041754 | orchestrator | 2025-04-05 12:12:06.042171 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-04-05 12:12:06.042213 | orchestrator | Saturday 05 April 2025 12:12:06 +0000 (0:00:01.394) 0:05:14.599 ******** 2025-04-05 12:12:06.232684 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:06.296376 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:06.368110 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:06.428626 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:06.612377 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:06.613551 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:06.613602 | orchestrator | changed: [testbed-manager] 2025-04-05 12:12:06.614545 | orchestrator | 2025-04-05 12:12:06.614935 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-04-05 12:12:06.615692 | orchestrator | Saturday 05 April 2025 12:12:06 +0000 (0:00:00.575) 0:05:15.175 ******** 2025-04-05 12:12:15.774862 | orchestrator | ok: [testbed-manager] 2025-04-05 
12:12:15.775060 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:15.776503 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:15.777586 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:15.777613 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:15.778746 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:15.779596 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:15.780432 | orchestrator | 2025-04-05 12:12:15.781686 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-04-05 12:12:15.782645 | orchestrator | Saturday 05 April 2025 12:12:15 +0000 (0:00:09.165) 0:05:24.341 ******** 2025-04-05 12:12:16.299178 | orchestrator | changed: [testbed-manager] 2025-04-05 12:12:16.367929 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:16.918572 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:16.919985 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:16.921037 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:16.921048 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:16.921154 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:16.921683 | orchestrator | 2025-04-05 12:12:16.922442 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-04-05 12:12:16.922981 | orchestrator | Saturday 05 April 2025 12:12:16 +0000 (0:00:01.142) 0:05:25.483 ******** 2025-04-05 12:12:25.184615 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:25.185481 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:25.185848 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:25.187321 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:25.187611 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:25.189771 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:25.190389 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:25.191297 | orchestrator | 2025-04-05 12:12:25.191645 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-05 12:12:25.192349 | orchestrator | Saturday 05 April 2025 12:12:25 +0000 (0:00:08.262) 0:05:33.746 ******** 2025-04-05 12:12:35.127883 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:35.128081 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:35.128116 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:35.128454 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:35.128701 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:35.129236 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:35.129729 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:35.130222 | orchestrator | 2025-04-05 12:12:35.130444 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-05 12:12:35.130940 | orchestrator | Saturday 05 April 2025 12:12:35 +0000 (0:00:09.943) 0:05:43.689 ******** 2025-04-05 12:12:35.521382 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-05 12:12:36.269418 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-05 12:12:36.273956 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-04-05 12:12:36.274132 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-05 12:12:36.275152 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-05 12:12:36.275582 | orchestrator | ok: [testbed-node-3] => 
(item=python3-docker) 2025-04-05 12:12:36.276594 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-05 12:12:36.276620 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-05 12:12:36.277513 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-05 12:12:36.277748 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-05 12:12:36.278543 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-05 12:12:36.279247 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-05 12:12:36.280050 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-05 12:12:36.280487 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-05 12:12:36.281131 | orchestrator | 2025-04-05 12:12:36.281391 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-05 12:12:36.281787 | orchestrator | Saturday 05 April 2025 12:12:36 +0000 (0:00:01.146) 0:05:44.835 ******** 2025-04-05 12:12:36.397240 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:36.462293 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:36.521857 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:36.581325 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:36.645859 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:36.757506 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:36.757990 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:36.758715 | orchestrator | 2025-04-05 12:12:36.759115 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-05 12:12:36.759266 | orchestrator | Saturday 05 April 2025 12:12:36 +0000 (0:00:00.489) 0:05:45.324 ******** 2025-04-05 12:12:41.517547 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:41.518510 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:41.520637 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:41.521094 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:41.522124 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:41.523025 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:41.523396 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:41.524135 | orchestrator | 2025-04-05 12:12:41.525390 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-05 12:12:41.525694 | orchestrator | Saturday 05 April 2025 12:12:41 +0000 (0:00:04.757) 0:05:50.082 ******** 2025-04-05 12:12:41.653605 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:41.714475 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:41.781169 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:41.841963 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:41.906313 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:42.017564 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:42.019280 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:42.019873 | orchestrator | 2025-04-05 12:12:42.020895 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-05 12:12:42.022325 | orchestrator | Saturday 05 April 2025 12:12:42 +0000 (0:00:00.502) 0:05:50.585 ******** 2025-04-05 12:12:42.090865 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-05 12:12:42.093523 | 
orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-05 12:12:42.156678 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:42.157246 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-05 12:12:42.158247 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-05 12:12:42.227607 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:42.228532 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-05 12:12:42.229332 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-05 12:12:42.293467 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:42.293770 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-05 12:12:42.294961 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-05 12:12:42.360959 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:42.361646 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-05 12:12:42.362162 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-05 12:12:42.435033 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:42.435726 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-05 12:12:42.437275 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-04-05 12:12:42.548802 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:42.549643 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-05 12:12:42.550608 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-05 12:12:42.554072 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:42.554159 | orchestrator | 2025-04-05 12:12:42.676860 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-05 12:12:42.676917 | orchestrator | Saturday 05 April 2025 12:12:42 +0000 (0:00:00.530) 0:05:51.115 ******** 2025-04-05 12:12:42.676942 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:42.738205 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:42.797966 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:42.863484 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:42.921855 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:43.020308 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:43.020841 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:43.024429 | orchestrator | 2025-04-05 12:12:43.148537 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-05 12:12:43.148587 | orchestrator | Saturday 05 April 2025 12:12:43 +0000 (0:00:00.471) 0:05:51.586 ******** 2025-04-05 12:12:43.148629 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:43.217775 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:43.281441 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:43.341384 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:43.399742 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:43.629549 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:43.630666 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:43.631603 | orchestrator | 2025-04-05 12:12:43.632801 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-05 12:12:43.633771 | orchestrator | Saturday 05 April 
2025 12:12:43 +0000 (0:00:00.608) 0:05:52.195 ******** 2025-04-05 12:12:43.756214 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:43.825470 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:12:43.901920 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:12:43.964784 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:12:44.048396 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:12:44.152245 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:12:44.153295 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:12:44.154164 | orchestrator | 2025-04-05 12:12:44.155340 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-05 12:12:44.156468 | orchestrator | Saturday 05 April 2025 12:12:44 +0000 (0:00:00.523) 0:05:52.718 ******** 2025-04-05 12:12:46.126801 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:46.127527 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:12:46.127837 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:12:46.128322 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:12:46.128650 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:12:46.129533 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:12:46.129771 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:12:46.132198 | orchestrator | 2025-04-05 12:12:46.132699 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-05 12:12:46.132996 | orchestrator | Saturday 05 April 2025 12:12:46 +0000 (0:00:01.973) 0:05:54.691 ******** 2025-04-05 12:12:46.951372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:12:46.953681 | orchestrator | 2025-04-05 12:12:46.954991 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-05 12:12:46.956024 | orchestrator | Saturday 05 April 2025 12:12:46 +0000 (0:00:00.826) 0:05:55.518 ******** 2025-04-05 12:12:47.529279 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:48.011163 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:48.012158 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:48.013417 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:48.014515 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:48.015338 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:48.016306 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:48.016757 | orchestrator | 2025-04-05 12:12:48.017645 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-05 12:12:48.018115 | orchestrator | Saturday 05 April 2025 12:12:48 +0000 (0:00:01.058) 0:05:56.576 ******** 2025-04-05 12:12:48.418316 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:48.813164 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:48.813582 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:48.814891 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:48.816648 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:48.819677 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:48.819993 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:48.820014 | orchestrator | 2025-04-05 12:12:48.820032 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] 
*********************** 2025-04-05 12:12:48.821159 | orchestrator | Saturday 05 April 2025 12:12:48 +0000 (0:00:00.804) 0:05:57.381 ******** 2025-04-05 12:12:50.037537 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:50.039405 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:50.039441 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:50.040812 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:50.040881 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:50.041490 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:50.042236 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:50.043181 | orchestrator | 2025-04-05 12:12:50.043470 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-05 12:12:50.044256 | orchestrator | Saturday 05 April 2025 12:12:50 +0000 (0:00:01.220) 0:05:58.601 ******** 2025-04-05 12:12:50.167320 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:12:51.380751 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:12:51.380996 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:12:51.381405 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:12:51.382586 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:12:51.383118 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:12:51.383942 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:12:51.384403 | orchestrator | 2025-04-05 12:12:51.385372 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-05 12:12:51.385587 | orchestrator | Saturday 05 April 2025 12:12:51 +0000 (0:00:01.344) 0:05:59.946 ******** 2025-04-05 12:12:52.759023 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:52.759723 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:52.760126 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:52.763553 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:52.764041 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:52.765140 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:52.765919 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:52.766582 | orchestrator | 2025-04-05 12:12:52.767030 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-05 12:12:52.767732 | orchestrator | Saturday 05 April 2025 12:12:52 +0000 (0:00:01.379) 0:06:01.325 ******** 2025-04-05 12:12:54.496112 | orchestrator | changed: [testbed-manager] 2025-04-05 12:12:54.496475 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:12:54.498128 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:12:54.498802 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:12:54.498981 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:12:54.500338 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:12:54.500757 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:12:54.501251 | orchestrator | 2025-04-05 12:12:54.501928 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-05 12:12:54.502352 | orchestrator | Saturday 05 April 2025 12:12:54 +0000 (0:00:01.735) 0:06:03.060 ******** 2025-04-05 12:12:55.335960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:12:55.336379 | orchestrator | 2025-04-05 
12:12:55.336799 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-04-05 12:12:55.337492 | orchestrator | Saturday 05 April 2025 12:12:55 +0000 (0:00:00.840) 0:06:03.901 ******** 2025-04-05 12:12:56.642744 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:56.643493 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:12:56.644139 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:12:56.645380 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:12:56.646206 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:12:56.647642 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:12:56.648128 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:12:56.648674 | orchestrator | 2025-04-05 12:12:56.649190 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-05 12:12:56.649696 | orchestrator | Saturday 05 April 2025 12:12:56 +0000 (0:00:01.306) 0:06:05.207 ******** 2025-04-05 12:12:57.859656 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:57.861902 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:12:57.863207 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:12:57.863237 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:12:57.864235 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:12:57.865100 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:12:57.865983 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:12:57.866467 | orchestrator | 2025-04-05 12:12:57.867326 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-05 12:12:57.867841 | orchestrator | Saturday 05 April 2025 12:12:57 +0000 (0:00:01.211) 0:06:06.419 ******** 2025-04-05 12:12:58.928100 | orchestrator | ok: [testbed-manager] 2025-04-05 12:12:58.929766 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:12:58.929870 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:12:58.930785 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:12:58.931562 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:12:58.932648 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:12:58.933748 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:12:58.934730 | orchestrator | 2025-04-05 12:12:58.935681 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-05 12:12:58.936208 | orchestrator | Saturday 05 April 2025 12:12:58 +0000 (0:00:01.073) 0:06:07.493 ******** 2025-04-05 12:13:00.117718 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:00.122260 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:00.123570 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:00.123800 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:00.124968 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:00.125392 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:00.125649 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:00.126908 | orchestrator | 2025-04-05 12:13:00.127444 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-05 12:13:00.128233 | orchestrator | Saturday 05 April 2025 12:13:00 +0000 (0:00:01.189) 0:06:08.682 ******** 2025-04-05 12:13:01.377324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:13:01.378670 | orchestrator | 2025-04-05 12:13:01.379729 | orchestrator 
| TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.380702 | orchestrator | Saturday 05 April 2025 12:13:00 +0000 (0:00:00.850) 0:06:09.532 ******** 2025-04-05 12:13:01.382555 | orchestrator | 2025-04-05 12:13:01.383553 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.384604 | orchestrator | Saturday 05 April 2025 12:13:00 +0000 (0:00:00.037) 0:06:09.569 ******** 2025-04-05 12:13:01.385288 | orchestrator | 2025-04-05 12:13:01.385670 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.386395 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.037) 0:06:09.606 ******** 2025-04-05 12:13:01.386989 | orchestrator | 2025-04-05 12:13:01.387308 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.388180 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.041) 0:06:09.648 ******** 2025-04-05 12:13:01.388940 | orchestrator | 2025-04-05 12:13:01.389631 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.390348 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.178) 0:06:09.827 ******** 2025-04-05 12:13:01.392942 | orchestrator | 2025-04-05 12:13:01.393435 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.393859 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.036) 0:06:09.863 ******** 2025-04-05 12:13:01.394342 | orchestrator | 2025-04-05 12:13:01.394683 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-05 12:13:01.395047 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.036) 0:06:09.900 ******** 2025-04-05 12:13:01.395600 | orchestrator | 2025-04-05 12:13:01.396397 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-05 12:13:01.397769 | orchestrator | Saturday 05 April 2025 12:13:01 +0000 (0:00:00.042) 0:06:09.942 ******** 2025-04-05 12:13:02.705296 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:02.706801 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:02.706884 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:02.706900 | orchestrator | 2025-04-05 12:13:02.706916 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-05 12:13:02.706940 | orchestrator | Saturday 05 April 2025 12:13:02 +0000 (0:00:01.324) 0:06:11.267 ******** 2025-04-05 12:13:04.129146 | orchestrator | changed: [testbed-manager] 2025-04-05 12:13:04.129331 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:04.129515 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:04.130096 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:04.130720 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:04.131119 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:04.131514 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:04.131981 | orchestrator | 2025-04-05 12:13:04.132404 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-05 12:13:04.133799 | orchestrator | Saturday 05 April 2025 12:13:04 +0000 (0:00:01.422) 0:06:12.689 ******** 2025-04-05 12:13:05.298953 | orchestrator | changed: [testbed-manager] 2025-04-05 
12:13:05.299128 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:05.299160 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:05.300047 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:05.300132 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:05.300498 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:05.301112 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:05.301686 | orchestrator | 2025-04-05 12:13:05.302090 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-05 12:13:05.302370 | orchestrator | Saturday 05 April 2025 12:13:05 +0000 (0:00:01.175) 0:06:13.865 ******** 2025-04-05 12:13:05.424684 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:07.093946 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:07.094773 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:07.094842 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:07.094858 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:07.094882 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:07.094949 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:07.095211 | orchestrator | 2025-04-05 12:13:07.095506 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-05 12:13:07.098480 | orchestrator | Saturday 05 April 2025 12:13:07 +0000 (0:00:01.793) 0:06:15.658 ******** 2025-04-05 12:13:07.218722 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:07.218910 | orchestrator | 2025-04-05 12:13:07.218940 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-05 12:13:08.332729 | orchestrator | Saturday 05 April 2025 12:13:07 +0000 (0:00:00.126) 0:06:15.784 ******** 2025-04-05 12:13:08.332868 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:08.332925 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:08.334130 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:08.334347 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:08.334367 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:08.334844 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:08.335256 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:08.335600 | orchestrator | 2025-04-05 12:13:08.336110 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-05 12:13:08.336585 | orchestrator | Saturday 05 April 2025 12:13:08 +0000 (0:00:01.111) 0:06:16.895 ******** 2025-04-05 12:13:08.452623 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:08.517366 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:08.577239 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:08.638875 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:08.703408 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:08.818861 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:08.820719 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:08.821131 | orchestrator | 2025-04-05 12:13:08.821583 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-05 12:13:08.822002 | orchestrator | Saturday 05 April 2025 12:13:08 +0000 (0:00:00.488) 0:06:17.384 ******** 2025-04-05 12:13:09.654227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:13:09.657122 | orchestrator | 2025-04-05 12:13:09.658393 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-04-05 12:13:09.659051 | orchestrator | Saturday 05 April 2025 12:13:09 +0000 (0:00:00.834) 0:06:18.218 ******** 2025-04-05 12:13:10.535273 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:10.537012 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:10.537276 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:10.539251 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:10.540099 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:10.541163 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:10.542465 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:10.543685 | orchestrator | 2025-04-05 12:13:10.544123 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-04-05 12:13:10.544777 | orchestrator | Saturday 05 April 2025 12:13:10 +0000 (0:00:00.882) 0:06:19.100 ******** 2025-04-05 12:13:13.464582 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-04-05 12:13:13.465478 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-04-05 12:13:13.465510 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-04-05 12:13:13.465864 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-04-05 12:13:13.466989 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-04-05 12:13:13.467507 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-04-05 12:13:13.468167 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-04-05 12:13:13.468621 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-04-05 12:13:13.468922 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-04-05 12:13:13.470093 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-04-05 12:13:13.470728 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-04-05 12:13:13.472024 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-04-05 12:13:13.472192 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-04-05 12:13:13.475105 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-04-05 12:13:13.475976 | orchestrator | 2025-04-05 12:13:13.476657 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-04-05 12:13:13.477117 | orchestrator | Saturday 05 April 2025 12:13:13 +0000 (0:00:02.929) 0:06:22.029 ******** 2025-04-05 12:13:13.593757 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:13.654131 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:13.714525 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:13.781744 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:13.841057 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:13.934187 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:13.934329 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:13.935256 | orchestrator | 2025-04-05 12:13:13.936281 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-04-05 12:13:13.937125 | orchestrator | Saturday 05 April 2025 12:13:13 +0000 (0:00:00.470) 0:06:22.500 ******** 
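The "Create facts directory" and "Copy docker fact files" tasks above rely on Ansible's local facts mechanism: files dropped into /etc/ansible/facts.d on each host are picked up during fact gathering and exposed as ansible_local.<name>. The real task files of the osism.services.docker role are not part of this log; the following is only a minimal sketch of that pattern, with hypothetical template names:

    # Minimal sketch, not the osism role source. Local fact files must live in
    # /etc/ansible/facts.d and end in ".fact"; executable ones must print JSON.
    - name: Create facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy docker fact files
      ansible.builtin.template:
        src: "{{ item }}.fact.j2"                     # hypothetical template name
        dest: "/etc/ansible/facts.d/{{ item }}.fact"
        mode: "0755"
      loop:
        - docker_containers
        - docker_images

After the next fact gathering, such scripts show up as ansible_local.docker_containers and ansible_local.docker_images on the target hosts.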
2025-04-05 12:13:14.907740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:13:14.908770 | orchestrator | 2025-04-05 12:13:14.909406 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-04-05 12:13:14.909432 | orchestrator | Saturday 05 April 2025 12:13:14 +0000 (0:00:00.971) 0:06:23.471 ******** 2025-04-05 12:13:15.316055 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:15.707700 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:15.708685 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:15.708746 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:15.709755 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:15.710504 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:15.710775 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:15.711500 | orchestrator | 2025-04-05 12:13:15.711975 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-04-05 12:13:15.712694 | orchestrator | Saturday 05 April 2025 12:13:15 +0000 (0:00:00.801) 0:06:24.272 ******** 2025-04-05 12:13:16.120308 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:16.493459 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:16.493581 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:16.493600 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:16.494515 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:16.495923 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:16.497164 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:16.497506 | orchestrator | 2025-04-05 12:13:16.497529 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-04-05 12:13:16.498106 | orchestrator | Saturday 05 April 2025 12:13:16 +0000 (0:00:00.787) 0:06:25.059 ******** 2025-04-05 12:13:16.617949 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:16.681619 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:16.740270 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:16.807588 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:16.870460 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:16.963069 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:16.963218 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:16.966275 | orchestrator | 2025-04-05 12:13:16.966928 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-04-05 12:13:16.967688 | orchestrator | Saturday 05 April 2025 12:13:16 +0000 (0:00:00.469) 0:06:25.528 ******** 2025-04-05 12:13:18.255290 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:18.255456 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:18.257339 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:18.259581 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:18.261695 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:18.262910 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:18.263790 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:18.264767 | orchestrator | 2025-04-05 12:13:18.265468 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-04-05 12:13:18.265902 | orchestrator | Saturday 05 April 2025 12:13:18 +0000 
(0:00:01.291) 0:06:26.820 ******** 2025-04-05 12:13:18.379648 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:18.437682 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:18.653903 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:18.715801 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:18.776128 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:18.873964 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:18.875417 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:18.876705 | orchestrator | 2025-04-05 12:13:18.877793 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-04-05 12:13:18.879117 | orchestrator | Saturday 05 April 2025 12:13:18 +0000 (0:00:00.620) 0:06:27.440 ******** 2025-04-05 12:13:26.745975 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:26.746440 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:26.747691 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:26.748843 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:26.749210 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:26.750678 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:26.751431 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:26.751732 | orchestrator | 2025-04-05 12:13:26.754106 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-04-05 12:13:27.959533 | orchestrator | Saturday 05 April 2025 12:13:26 +0000 (0:00:07.870) 0:06:35.310 ******** 2025-04-05 12:13:27.959649 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:27.959711 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:27.961037 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:27.963165 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:27.963575 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:27.963601 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:27.964205 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:27.965371 | orchestrator | 2025-04-05 12:13:27.966278 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-04-05 12:13:27.966847 | orchestrator | Saturday 05 April 2025 12:13:27 +0000 (0:00:01.214) 0:06:36.524 ******** 2025-04-05 12:13:29.622928 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:29.623109 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:29.625196 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:29.626783 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:29.627728 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:29.628343 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:29.629688 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:29.630185 | orchestrator | 2025-04-05 12:13:29.630886 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-04-05 12:13:29.632019 | orchestrator | Saturday 05 April 2025 12:13:29 +0000 (0:00:01.664) 0:06:38.189 ******** 2025-04-05 12:13:31.341967 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:31.342617 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:31.346117 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:31.346256 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:31.347034 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:31.347059 | orchestrator | changed: [testbed-node-4] 
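The docker_compose tasks above clean up the legacy standalone docker-compose (v1) artifacts and install the docker-compose-plugin package, which provides Compose v2 as the "docker compose" subcommand; the copied osism.target and docker-compose unit files presumably let the compose-based services be managed through systemd as a group. The role internals are not shown in this log; a minimal sketch of the two key steps, assuming the plain apt and systemd modules:

    # Minimal sketch, not the osism role source.
    - name: Install docker-compose-plugin package
      ansible.builtin.apt:
        name: docker-compose-plugin      # provides the "docker compose" subcommand
        state: present

    - name: Enable osism.target
      ansible.builtin.systemd:
        name: osism.target               # assumed to group the per-service compose units
        enabled: true
        daemon_reload: true

With the plugin installed, compose projects are invoked as "docker compose up -d" rather than through the old docker-compose binary that the earlier cleanup tasks check for.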
2025-04-05 12:13:31.347079 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:31.347693 | orchestrator | 2025-04-05 12:13:31.348725 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-05 12:13:31.349402 | orchestrator | Saturday 05 April 2025 12:13:31 +0000 (0:00:01.717) 0:06:39.907 ******** 2025-04-05 12:13:31.747364 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:32.158569 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:32.159031 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:32.159063 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:32.159466 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:32.159982 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:32.160414 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:32.162161 | orchestrator | 2025-04-05 12:13:32.162358 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-05 12:13:32.162387 | orchestrator | Saturday 05 April 2025 12:13:32 +0000 (0:00:00.816) 0:06:40.723 ******** 2025-04-05 12:13:32.286251 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:32.352930 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:32.416670 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:32.479796 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:32.538862 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:32.905580 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:32.905736 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:32.905940 | orchestrator | 2025-04-05 12:13:32.905970 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-04-05 12:13:32.907077 | orchestrator | Saturday 05 April 2025 12:13:32 +0000 (0:00:00.750) 0:06:41.473 ******** 2025-04-05 12:13:33.043940 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:33.108316 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:33.169990 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:33.236137 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:33.297326 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:33.391638 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:33.392345 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:33.393124 | orchestrator | 2025-04-05 12:13:33.394000 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-04-05 12:13:33.394910 | orchestrator | Saturday 05 April 2025 12:13:33 +0000 (0:00:00.483) 0:06:41.957 ******** 2025-04-05 12:13:33.673550 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:33.729609 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:33.797401 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:33.859058 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:33.920963 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:34.030423 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:34.031325 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:34.032210 | orchestrator | 2025-04-05 12:13:34.033046 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-04-05 12:13:34.034120 | orchestrator | Saturday 05 April 2025 12:13:34 +0000 (0:00:00.641) 0:06:42.598 ******** 2025-04-05 12:13:34.155280 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:34.222179 | orchestrator | ok: [testbed-node-0] 2025-04-05 
12:13:34.284469 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:34.345686 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:34.419407 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:34.516609 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:34.518209 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:34.518344 | orchestrator | 2025-04-05 12:13:34.519190 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-04-05 12:13:34.521332 | orchestrator | Saturday 05 April 2025 12:13:34 +0000 (0:00:00.484) 0:06:43.082 ******** 2025-04-05 12:13:34.669538 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:34.735587 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:34.807089 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:34.873288 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:34.934488 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:35.033004 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:35.034428 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:35.034912 | orchestrator | 2025-04-05 12:13:35.034942 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-04-05 12:13:35.035658 | orchestrator | Saturday 05 April 2025 12:13:35 +0000 (0:00:00.516) 0:06:43.598 ******** 2025-04-05 12:13:38.874864 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:38.875121 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:38.875625 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:38.877129 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:38.877408 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:38.878090 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:38.878996 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:38.879395 | orchestrator | 2025-04-05 12:13:38.880272 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-04-05 12:13:38.880968 | orchestrator | Saturday 05 April 2025 12:13:38 +0000 (0:00:03.843) 0:06:47.442 ******** 2025-04-05 12:13:39.002120 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:39.229646 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:39.295085 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:39.356125 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:39.424240 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:39.538973 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:39.539971 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:39.543488 | orchestrator | 2025-04-05 12:13:40.302431 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-04-05 12:13:40.302549 | orchestrator | Saturday 05 April 2025 12:13:39 +0000 (0:00:00.662) 0:06:48.105 ******** 2025-04-05 12:13:40.302584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:13:40.303640 | orchestrator | 2025-04-05 12:13:40.303673 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-04-05 12:13:40.304921 | orchestrator | Saturday 05 April 2025 12:13:40 +0000 (0:00:00.761) 0:06:48.867 ******** 2025-04-05 12:13:41.999393 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:42.000442 | orchestrator | ok: 
[testbed-node-0] 2025-04-05 12:13:42.001349 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:42.002150 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:42.002374 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:42.003160 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:42.003546 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:42.004165 | orchestrator | 2025-04-05 12:13:42.004856 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-04-05 12:13:42.005333 | orchestrator | Saturday 05 April 2025 12:13:41 +0000 (0:00:01.696) 0:06:50.564 ******** 2025-04-05 12:13:43.088597 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:43.090129 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:43.090572 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:43.091983 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:43.092772 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:43.093428 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:43.094111 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:43.094796 | orchestrator | 2025-04-05 12:13:43.095361 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-04-05 12:13:43.095884 | orchestrator | Saturday 05 April 2025 12:13:43 +0000 (0:00:01.088) 0:06:51.652 ******** 2025-04-05 12:13:43.476085 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:44.087278 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:44.087880 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:44.088841 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:44.090218 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:44.090507 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:44.091338 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:44.092082 | orchestrator | 2025-04-05 12:13:44.092594 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-04-05 12:13:44.093463 | orchestrator | Saturday 05 April 2025 12:13:44 +0000 (0:00:01.002) 0:06:52.654 ******** 2025-04-05 12:13:45.582927 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.584135 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.587500 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.588900 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.588925 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.588945 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.589000 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-05 12:13:45.589476 | orchestrator | 2025-04-05 12:13:45.590377 | orchestrator | TASK [osism.services.lldpd : Include distribution specific 
install tasks] ****** 2025-04-05 12:13:45.592366 | orchestrator | Saturday 05 April 2025 12:13:45 +0000 (0:00:01.492) 0:06:54.147 ******** 2025-04-05 12:13:46.463944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:13:46.465305 | orchestrator | 2025-04-05 12:13:46.466714 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-04-05 12:13:46.467631 | orchestrator | Saturday 05 April 2025 12:13:46 +0000 (0:00:00.883) 0:06:55.031 ******** 2025-04-05 12:13:54.323205 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:54.324656 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:54.324702 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:54.326320 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:54.326484 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:54.327185 | orchestrator | changed: [testbed-manager] 2025-04-05 12:13:54.328232 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:54.329157 | orchestrator | 2025-04-05 12:13:54.329506 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-04-05 12:13:54.330399 | orchestrator | Saturday 05 April 2025 12:13:54 +0000 (0:00:07.855) 0:07:02.887 ******** 2025-04-05 12:13:55.991706 | orchestrator | ok: [testbed-manager] 2025-04-05 12:13:55.991926 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:55.992535 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:55.993402 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:55.994348 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:55.995059 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:55.995766 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:55.996137 | orchestrator | 2025-04-05 12:13:55.998170 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-04-05 12:13:57.400117 | orchestrator | Saturday 05 April 2025 12:13:55 +0000 (0:00:01.671) 0:07:04.558 ******** 2025-04-05 12:13:57.400241 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:13:57.400380 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:13:57.403718 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:13:57.404681 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:13:57.405517 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:13:57.406224 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:13:57.406842 | orchestrator | 2025-04-05 12:13:57.407597 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-04-05 12:13:57.408331 | orchestrator | Saturday 05 April 2025 12:13:57 +0000 (0:00:01.406) 0:07:05.965 ******** 2025-04-05 12:13:58.741092 | orchestrator | changed: [testbed-manager] 2025-04-05 12:13:58.741792 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:13:58.742372 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:13:58.743502 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:13:58.745181 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:13:58.745635 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:13:58.745660 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:13:58.746208 | orchestrator | 2025-04-05 12:13:58.746702 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-04-05 12:13:58.747262 | orchestrator | 2025-04-05 12:13:58.747718 | orchestrator | TASK [Include hardening role] ************************************************** 2025-04-05 12:13:58.748463 | orchestrator | Saturday 05 April 2025 12:13:58 +0000 (0:00:01.344) 0:07:07.309 ******** 2025-04-05 12:13:58.865358 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:13:58.922880 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:13:58.980234 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:13:59.045147 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:13:59.102441 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:13:59.203748 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:13:59.204291 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:13:59.205961 | orchestrator | 2025-04-05 12:13:59.206260 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-04-05 12:13:59.206559 | orchestrator | 2025-04-05 12:13:59.206588 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-04-05 12:13:59.207044 | orchestrator | Saturday 05 April 2025 12:13:59 +0000 (0:00:00.463) 0:07:07.772 ******** 2025-04-05 12:14:00.583987 | orchestrator | changed: [testbed-manager] 2025-04-05 12:14:00.585180 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:00.585355 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:00.586742 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:00.588079 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:00.588512 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:00.589281 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:00.590081 | orchestrator | 2025-04-05 12:14:00.591148 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-04-05 12:14:00.592097 | orchestrator | Saturday 05 April 2025 12:14:00 +0000 (0:00:01.378) 0:07:09.150 ******** 2025-04-05 12:14:02.246576 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:02.247276 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:02.247697 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:02.248852 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:02.251277 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:02.251806 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:02.251853 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:02.251888 | orchestrator | 2025-04-05 12:14:02.251908 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-04-05 12:14:02.252492 | orchestrator | Saturday 05 April 2025 12:14:02 +0000 (0:00:01.661) 0:07:10.812 ******** 2025-04-05 12:14:02.383749 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:02.442722 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:02.507536 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:02.566680 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:02.623420 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:03.010413 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:03.010613 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:03.011414 | orchestrator | 2025-04-05 12:14:03.012090 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-04-05 12:14:03.012795 | orchestrator | Saturday 05 April 2025 12:14:03 +0000 
(0:00:00.764) 0:07:11.577 ******** 2025-04-05 12:14:04.382765 | orchestrator | changed: [testbed-manager] 2025-04-05 12:14:04.382980 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:04.384522 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:04.385317 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:04.388587 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:04.389460 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:04.391103 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:04.392467 | orchestrator | 2025-04-05 12:14:04.394468 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-04-05 12:14:05.291859 | orchestrator | 2025-04-05 12:14:05.291985 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-04-05 12:14:05.292007 | orchestrator | Saturday 05 April 2025 12:14:04 +0000 (0:00:01.372) 0:07:12.949 ******** 2025-04-05 12:14:05.292039 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:14:05.292114 | orchestrator | 2025-04-05 12:14:05.292466 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-05 12:14:05.293987 | orchestrator | Saturday 05 April 2025 12:14:05 +0000 (0:00:00.906) 0:07:13.856 ******** 2025-04-05 12:14:05.677783 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:06.148781 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:06.149670 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:06.150368 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:06.151227 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:06.152009 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:06.152710 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:06.152983 | orchestrator | 2025-04-05 12:14:06.153768 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-04-05 12:14:06.154396 | orchestrator | Saturday 05 April 2025 12:14:06 +0000 (0:00:00.858) 0:07:14.715 ******** 2025-04-05 12:14:07.293021 | orchestrator | changed: [testbed-manager] 2025-04-05 12:14:07.294355 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:07.294751 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:07.297431 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:07.298319 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:07.299381 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:07.299875 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:07.300481 | orchestrator | 2025-04-05 12:14:07.301613 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-04-05 12:14:07.302102 | orchestrator | Saturday 05 April 2025 12:14:07 +0000 (0:00:01.142) 0:07:15.857 ******** 2025-04-05 12:14:08.227978 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:14:08.228895 | orchestrator | 2025-04-05 12:14:08.230557 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-05 12:14:08.231094 | orchestrator | Saturday 05 April 2025 12:14:08 +0000 (0:00:00.934) 0:07:16.792 ******** 2025-04-05 12:14:08.617277 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:09.020937 | 
orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:09.022792 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:09.023509 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:09.023538 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:09.024312 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:09.024830 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:09.025837 | orchestrator | 2025-04-05 12:14:09.026592 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-04-05 12:14:09.027261 | orchestrator | Saturday 05 April 2025 12:14:09 +0000 (0:00:00.793) 0:07:17.585 ******** 2025-04-05 12:14:09.402861 | orchestrator | changed: [testbed-manager] 2025-04-05 12:14:10.037402 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:10.037955 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:10.039288 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:10.040124 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:10.040934 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:10.042917 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:10.043947 | orchestrator | 2025-04-05 12:14:10.045158 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:14:10.046568 | orchestrator | 2025-04-05 12:14:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:14:10.046748 | orchestrator | 2025-04-05 12:14:10 | INFO  | Please wait and do not abort execution. 2025-04-05 12:14:10.047973 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-04-05 12:14:10.049324 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-04-05 12:14:10.050010 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-05 12:14:10.052838 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-05 12:14:10.053693 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-05 12:14:10.054422 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-05 12:14:10.055246 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-05 12:14:10.055916 | orchestrator | 2025-04-05 12:14:10.056621 | orchestrator | 2025-04-05 12:14:10.057590 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:14:10.058135 | orchestrator | Saturday 05 April 2025 12:14:10 +0000 (0:00:01.017) 0:07:18.602 ******** 2025-04-05 12:14:10.058509 | orchestrator | =============================================================================== 2025-04-05 12:14:10.059344 | orchestrator | osism.commons.packages : Install required packages --------------------- 53.88s 2025-04-05 12:14:10.060254 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.83s 2025-04-05 12:14:10.060674 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 29.84s 2025-04-05 12:14:10.061480 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.87s 2025-04-05 12:14:10.062107 | orchestrator | osism.commons.packages : 
Remove dependencies that are no longer required -- 11.42s 2025-04-05 12:14:10.062912 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.29s 2025-04-05 12:14:10.064373 | orchestrator | osism.services.docker : Install docker package -------------------------- 9.94s 2025-04-05 12:14:10.065306 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.17s 2025-04-05 12:14:10.066258 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.68s 2025-04-05 12:14:10.067212 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.45s 2025-04-05 12:14:10.068271 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.43s 2025-04-05 12:14:10.068523 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.38s 2025-04-05 12:14:10.069305 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.26s 2025-04-05 12:14:10.070061 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.05s 2025-04-05 12:14:10.070343 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.87s 2025-04-05 12:14:10.071139 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 7.86s 2025-04-05 12:14:10.071603 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.99s 2025-04-05 12:14:10.072250 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.64s 2025-04-05 12:14:10.073516 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.21s 2025-04-05 12:14:10.073754 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 4.85s 2025-04-05 12:14:10.650253 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-04-05 12:14:12.565201 | orchestrator | + osism apply network 2025-04-05 12:14:12.565308 | orchestrator | 2025-04-05 12:14:12 | INFO  | Task 4f508e0e-8bdb-4faf-a51b-fb5bc40519b2 (network) was prepared for execution. 2025-04-05 12:14:16.521281 | orchestrator | 2025-04-05 12:14:12 | INFO  | It takes a moment until task 4f508e0e-8bdb-4faf-a51b-fb5bc40519b2 (network) has been started and output is visible here. 
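The play that follows applies the osism.commons.network role: it renders a netplan configuration from a template, removes the cloud-init generated /etc/netplan/50-cloud-init.yaml, installs networkd-dispatcher, and creates systemd-networkd netdev/network files for the vxlan0 (VNI 42) and vxlan1 (VNI 23) overlay interfaces with an MTU of 1350. Below is a minimal sketch of shell commands that could be used afterwards to inspect the result on one of the nodes; the interface and file names are taken from the log output, but the commands themselves are an illustration and are not run by the job.

    # inspect the rendered netplan configuration (file name as seen in the log)
    netplan get
    cat /etc/netplan/01-osism.yaml

    # check the systemd-networkd managed VXLAN overlay interfaces
    networkctl status vxlan0
    ip -d link show vxlan0    # details include vxlan id 42, the local address and mtu 1350
    ip addr show vxlan1       # overlay address from the 192.168.128.0/20 range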
2025-04-05 12:14:16.521434 | orchestrator | 2025-04-05 12:14:16.522399 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-04-05 12:14:16.522435 | orchestrator | 2025-04-05 12:14:16.524049 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-04-05 12:14:16.524466 | orchestrator | Saturday 05 April 2025 12:14:16 +0000 (0:00:00.257) 0:00:00.257 ******** 2025-04-05 12:14:16.665006 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:16.752851 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:16.828139 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:16.902384 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:17.067423 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:17.188781 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:17.189935 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:17.190960 | orchestrator | 2025-04-05 12:14:17.192173 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-04-05 12:14:17.192766 | orchestrator | Saturday 05 April 2025 12:14:17 +0000 (0:00:00.668) 0:00:00.925 ******** 2025-04-05 12:14:18.298997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:14:18.299171 | orchestrator | 2025-04-05 12:14:18.300244 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-04-05 12:14:18.300871 | orchestrator | Saturday 05 April 2025 12:14:18 +0000 (0:00:01.108) 0:00:02.034 ******** 2025-04-05 12:14:20.743848 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:20.744401 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:20.745525 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:20.747391 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:20.748706 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:20.749759 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:20.750984 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:20.752102 | orchestrator | 2025-04-05 12:14:20.752611 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-04-05 12:14:20.753693 | orchestrator | Saturday 05 April 2025 12:14:20 +0000 (0:00:02.446) 0:00:04.480 ******** 2025-04-05 12:14:22.314154 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:22.317734 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:22.317921 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:22.317950 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:22.317969 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:22.318321 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:22.319246 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:22.319961 | orchestrator | 2025-04-05 12:14:22.321267 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-04-05 12:14:22.322243 | orchestrator | Saturday 05 April 2025 12:14:22 +0000 (0:00:01.567) 0:00:06.047 ******** 2025-04-05 12:14:22.821224 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-05 12:14:23.294165 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-05 12:14:23.294245 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-05 12:14:23.294272 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-05 12:14:23.294970 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-05 12:14:23.297909 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-05 12:14:23.297972 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-05 12:14:23.297993 | orchestrator | 2025-04-05 12:14:23.299310 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-05 12:14:23.300134 | orchestrator | Saturday 05 April 2025 12:14:23 +0000 (0:00:00.985) 0:00:07.033 ******** 2025-04-05 12:14:26.528128 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:14:26.531045 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-05 12:14:26.531442 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-05 12:14:26.531469 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:14:26.531488 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:14:26.532114 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-05 12:14:26.532532 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-05 12:14:26.532962 | orchestrator | 2025-04-05 12:14:26.533372 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-05 12:14:26.533841 | orchestrator | Saturday 05 April 2025 12:14:26 +0000 (0:00:03.233) 0:00:10.266 ******** 2025-04-05 12:14:28.133939 | orchestrator | changed: [testbed-manager] 2025-04-05 12:14:28.138122 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:28.138751 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:28.138774 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:28.138789 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:28.138803 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:28.138846 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:28.139508 | orchestrator | 2025-04-05 12:14:28.140250 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-05 12:14:28.140982 | orchestrator | Saturday 05 April 2025 12:14:28 +0000 (0:00:01.603) 0:00:11.870 ******** 2025-04-05 12:14:29.690515 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:14:29.692900 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:14:29.692962 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-05 12:14:29.693524 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:14:29.694463 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-05 12:14:29.695308 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-05 12:14:29.698160 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-05 12:14:30.056684 | orchestrator | 2025-04-05 12:14:30.056770 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-05 12:14:30.056789 | orchestrator | Saturday 05 April 2025 12:14:29 +0000 (0:00:01.558) 0:00:13.429 ******** 2025-04-05 12:14:30.056861 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:30.638131 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:30.639466 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:30.639863 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:30.642958 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:30.643851 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:30.643876 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:30.643896 | orchestrator | 2025-04-05 
12:14:30.644169 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-04-05 12:14:30.644876 | orchestrator | Saturday 05 April 2025 12:14:30 +0000 (0:00:00.943) 0:00:14.372 ******** 2025-04-05 12:14:30.785243 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:30.852650 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:30.924202 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:30.993476 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:31.059529 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:31.173254 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:31.174502 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:31.175873 | orchestrator | 2025-04-05 12:14:31.176681 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-05 12:14:31.177351 | orchestrator | Saturday 05 April 2025 12:14:31 +0000 (0:00:00.539) 0:00:14.912 ******** 2025-04-05 12:14:33.449476 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:33.449646 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:33.449670 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:33.449685 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:33.449725 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:33.451147 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:33.451752 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:33.452648 | orchestrator | 2025-04-05 12:14:33.453653 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-05 12:14:33.456694 | orchestrator | Saturday 05 April 2025 12:14:33 +0000 (0:00:02.266) 0:00:17.179 ******** 2025-04-05 12:14:33.694387 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:33.774642 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:33.858495 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:33.941620 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:34.232445 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:34.233746 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:34.237263 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-05 12:14:36.131800 | orchestrator | 2025-04-05 12:14:36.131935 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-05 12:14:36.131955 | orchestrator | Saturday 05 April 2025 12:14:34 +0000 (0:00:00.792) 0:00:17.971 ******** 2025-04-05 12:14:36.131984 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:36.132051 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:14:36.132072 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:14:36.132366 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:14:36.133530 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:14:36.134146 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:14:36.134171 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:14:36.138495 | orchestrator | 2025-04-05 12:14:36.138743 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-05 12:14:36.139512 | orchestrator | Saturday 05 April 2025 12:14:36 +0000 (0:00:01.893) 0:00:19.864 ******** 2025-04-05 12:14:37.347350 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:14:37.348616 | orchestrator | 2025-04-05 12:14:37.349911 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-05 12:14:37.353527 | orchestrator | Saturday 05 April 2025 12:14:37 +0000 (0:00:01.218) 0:00:21.082 ******** 2025-04-05 12:14:38.064987 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:38.484193 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:38.484610 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:38.485534 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:38.486259 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:38.487102 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:38.487848 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:38.488546 | orchestrator | 2025-04-05 12:14:38.489299 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-05 12:14:38.489591 | orchestrator | Saturday 05 April 2025 12:14:38 +0000 (0:00:01.136) 0:00:22.219 ******** 2025-04-05 12:14:38.659889 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:38.736668 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:38.830726 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:38.912325 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:38.989025 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:39.129536 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:39.130093 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:39.130704 | orchestrator | 2025-04-05 12:14:39.131594 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-05 12:14:39.132418 | orchestrator | Saturday 05 April 2025 12:14:39 +0000 (0:00:00.650) 0:00:22.869 ******** 2025-04-05 12:14:39.464564 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:39.467216 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:39.743204 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:39.744676 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:39.745382 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:39.746300 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:39.841234 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:39.842541 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:40.352874 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:40.353946 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:40.354544 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:40.354573 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 12:14:40.354726 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-05 12:14:40.355224 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-05 
12:14:40.355608 | orchestrator | 2025-04-05 12:14:40.356514 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-05 12:14:40.357325 | orchestrator | Saturday 05 April 2025 12:14:40 +0000 (0:00:01.215) 0:00:24.085 ******** 2025-04-05 12:14:40.513405 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:40.590893 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:40.666616 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:40.757862 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:40.829553 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:40.956341 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:40.957560 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:40.959741 | orchestrator | 2025-04-05 12:14:40.960428 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-04-05 12:14:40.961149 | orchestrator | Saturday 05 April 2025 12:14:40 +0000 (0:00:00.609) 0:00:24.695 ******** 2025-04-05 12:14:44.353594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-5, testbed-node-3, testbed-node-4 2025-04-05 12:14:44.353788 | orchestrator | 2025-04-05 12:14:44.354898 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-04-05 12:14:44.357995 | orchestrator | Saturday 05 April 2025 12:14:44 +0000 (0:00:03.393) 0:00:28.088 ******** 2025-04-05 12:14:48.875258 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.875698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.875729 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.875749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.876908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.877525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.878614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.879089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:48.879576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.880097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.880599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.881378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.881701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.882372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:48.882839 | orchestrator | 2025-04-05 12:14:48.883468 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-04-05 12:14:48.883954 | orchestrator | Saturday 05 April 2025 12:14:48 +0000 (0:00:04.521) 0:00:32.609 ******** 2025-04-05 12:14:53.384351 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.384941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.385670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.386769 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.387823 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.388324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.389847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.390502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.391023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-04-05 12:14:53.391667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.392318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.392736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.393256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.393783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-04-05 12:14:53.394366 | orchestrator | 2025-04-05 12:14:53.395155 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-04-05 12:14:53.395840 | orchestrator | Saturday 05 April 2025 12:14:53 +0000 (0:00:04.510) 
0:00:37.120 ******** 2025-04-05 12:14:54.522698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:14:54.522872 | orchestrator | 2025-04-05 12:14:54.525938 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-05 12:14:54.933467 | orchestrator | Saturday 05 April 2025 12:14:54 +0000 (0:00:01.138) 0:00:38.259 ******** 2025-04-05 12:14:54.933593 | orchestrator | ok: [testbed-manager] 2025-04-05 12:14:54.998694 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:14:55.398282 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:14:55.399098 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:14:55.401899 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:14:55.402517 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:14:55.402543 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:14:55.402562 | orchestrator | 2025-04-05 12:14:55.403093 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-05 12:14:55.403647 | orchestrator | Saturday 05 April 2025 12:14:55 +0000 (0:00:00.876) 0:00:39.136 ******** 2025-04-05 12:14:55.483227 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:55.485988 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:55.564434 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:55.564484 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:55.564507 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:55.565354 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:55.566750 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:55.567447 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:55.569134 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:55.643051 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:55.644111 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:55.645868 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:55.645966 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:55.842775 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:55.842974 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:55.843826 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:55.844791 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:55.845209 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:55.926169 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:55.927064 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:55.929900 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:56.019031 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:56.019114 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:56.019143 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:56.019616 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:56.022974 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:57.233356 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:57.233466 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:57.233494 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:57.233555 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:57.234269 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-05 12:14:57.234863 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-05 12:14:57.235707 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-05 12:14:57.239165 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-05 12:14:57.387130 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:57.387177 | orchestrator | 2025-04-05 12:14:57.387189 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-04-05 12:14:57.387204 | orchestrator | Saturday 05 April 2025 12:14:57 +0000 (0:00:01.832) 0:00:40.968 ******** 2025-04-05 12:14:57.387224 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:57.479255 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:57.568935 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:57.652081 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:57.733330 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:58 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:58.000196 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:58.001093 | orchestrator | 2025-04-05 12:14:58.001854 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-05 12:14:58.002271 | orchestrator | Saturday 05 April 2025 12:14:57 +0000 (0:00:00.768) 0:00:41.737 ******** 2025-04-05 12:14:58.163989 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:14:58.240226 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:14:58.320644 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:14:58.400050 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:14:58.478522 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:14:58.505476 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:14:58.505581 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:14:58.505998 | orchestrator | 2025-04-05 12:14:58.506940 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:14:58.507202 | orchestrator | 2025-04-05 12:14:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-05 12:14:58.507593 | orchestrator | 2025-04-05 12:14:58 | INFO  | Please wait and do not abort execution. 2025-04-05 12:14:58.508609 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:14:58.509460 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.511293 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.511641 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.512093 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.512495 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.512999 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:14:58.513564 | orchestrator | 2025-04-05 12:14:58.513741 | orchestrator | 2025-04-05 12:14:58.514312 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:14:58.514744 | orchestrator | Saturday 05 April 2025 12:14:58 +0000 (0:00:00.508) 0:00:42.245 ******** 2025-04-05 12:14:58.515179 | orchestrator | =============================================================================== 2025-04-05 12:14:58.515575 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.52s 2025-04-05 12:14:58.516024 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.51s 2025-04-05 12:14:58.516412 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.39s 2025-04-05 12:14:58.516794 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.23s 2025-04-05 12:14:58.517690 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.45s 2025-04-05 12:14:58.518651 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.27s 2025-04-05 12:14:58.519325 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.89s 2025-04-05 12:14:58.519823 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.83s 2025-04-05 12:14:58.520579 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s 2025-04-05 12:14:58.520781 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.57s 2025-04-05 12:14:58.521220 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.56s 2025-04-05 12:14:58.521779 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2025-04-05 12:14:58.522501 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s 2025-04-05 12:14:58.523268 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.14s 2025-04-05 12:14:58.523462 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s 2025-04-05 12:14:58.523835 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.11s 2025-04-05 12:14:58.524046 | orchestrator | osism.commons.network : 
Create required directories --------------------- 0.99s 2025-04-05 12:14:58.524920 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.94s 2025-04-05 12:14:58.525226 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.88s 2025-04-05 12:14:58.525566 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.79s 2025-04-05 12:14:59.023242 | orchestrator | + osism apply wireguard 2025-04-05 12:15:00.634350 | orchestrator | 2025-04-05 12:15:00 | INFO  | Task 50d3388b-b8d1-4430-9669-93a7f0b7eccd (wireguard) was prepared for execution. 2025-04-05 12:15:04.418566 | orchestrator | 2025-04-05 12:15:00 | INFO  | It takes a moment until task 50d3388b-b8d1-4430-9669-93a7f0b7eccd (wireguard) has been started and output is visible here. 2025-04-05 12:15:04.418755 | orchestrator | 2025-04-05 12:15:04.420011 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-04-05 12:15:04.420759 | orchestrator | 2025-04-05 12:15:04.422538 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-04-05 12:15:04.423331 | orchestrator | Saturday 05 April 2025 12:15:04 +0000 (0:00:00.215) 0:00:00.215 ******** 2025-04-05 12:15:05.825992 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:05.826873 | orchestrator | 2025-04-05 12:15:05.827428 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-04-05 12:15:05.828215 | orchestrator | Saturday 05 April 2025 12:15:05 +0000 (0:00:01.412) 0:00:01.627 ******** 2025-04-05 12:15:11.687073 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:11.687242 | orchestrator | 2025-04-05 12:15:11.687784 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-04-05 12:15:11.688724 | orchestrator | Saturday 05 April 2025 12:15:11 +0000 (0:00:05.860) 0:00:07.488 ******** 2025-04-05 12:15:12.234283 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:12.234908 | orchestrator | 2025-04-05 12:15:12.235887 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-04-05 12:15:12.236555 | orchestrator | Saturday 05 April 2025 12:15:12 +0000 (0:00:00.548) 0:00:08.037 ******** 2025-04-05 12:15:12.650758 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:12.651602 | orchestrator | 2025-04-05 12:15:12.652304 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-04-05 12:15:12.652534 | orchestrator | Saturday 05 April 2025 12:15:12 +0000 (0:00:00.416) 0:00:08.453 ******** 2025-04-05 12:15:13.172891 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:13.174468 | orchestrator | 2025-04-05 12:15:13.174737 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-04-05 12:15:13.174765 | orchestrator | Saturday 05 April 2025 12:15:13 +0000 (0:00:00.521) 0:00:08.975 ******** 2025-04-05 12:15:13.552263 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:13.552914 | orchestrator | 2025-04-05 12:15:13.553252 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-04-05 12:15:13.553929 | orchestrator | Saturday 05 April 2025 12:15:13 +0000 (0:00:00.380) 0:00:09.355 ******** 2025-04-05 12:15:13.946600 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:13.947088 | orchestrator | 2025-04-05 
12:15:13.947716 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-04-05 12:15:13.948604 | orchestrator | Saturday 05 April 2025 12:15:13 +0000 (0:00:00.392) 0:00:09.748 ******** 2025-04-05 12:15:15.038232 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:15.038692 | orchestrator | 2025-04-05 12:15:15.039061 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-04-05 12:15:15.040641 | orchestrator | Saturday 05 April 2025 12:15:15 +0000 (0:00:01.091) 0:00:10.839 ******** 2025-04-05 12:15:15.921542 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-05 12:15:15.922481 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:15.923254 | orchestrator | 2025-04-05 12:15:15.924093 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-04-05 12:15:15.924543 | orchestrator | Saturday 05 April 2025 12:15:15 +0000 (0:00:00.884) 0:00:11.724 ******** 2025-04-05 12:15:17.554684 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:17.556109 | orchestrator | 2025-04-05 12:15:17.556581 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-04-05 12:15:17.556607 | orchestrator | Saturday 05 April 2025 12:15:17 +0000 (0:00:01.631) 0:00:13.355 ******** 2025-04-05 12:15:18.428211 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:18.428324 | orchestrator | 2025-04-05 12:15:18.428343 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:15:18.428363 | orchestrator | 2025-04-05 12:15:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:15:18.428426 | orchestrator | 2025-04-05 12:15:18 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:15:18.428482 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:15:18.428788 | orchestrator | 2025-04-05 12:15:18.429390 | orchestrator | 2025-04-05 12:15:18.429779 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:15:18.430100 | orchestrator | Saturday 05 April 2025 12:15:18 +0000 (0:00:00.873) 0:00:14.228 ******** 2025-04-05 12:15:18.430466 | orchestrator | =============================================================================== 2025-04-05 12:15:18.430730 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.86s 2025-04-05 12:15:18.431266 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.63s 2025-04-05 12:15:18.431594 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.41s 2025-04-05 12:15:18.431962 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.09s 2025-04-05 12:15:18.432264 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-04-05 12:15:18.432730 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.87s 2025-04-05 12:15:18.433007 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2025-04-05 12:15:18.433314 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-04-05 12:15:18.433601 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-04-05 12:15:18.434004 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s 2025-04-05 12:15:18.434307 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.38s 2025-04-05 12:15:18.789221 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-04-05 12:15:18.823057 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-04-05 12:15:18.823140 | orchestrator | Dload Upload Total Spent Left Speed 2025-04-05 12:15:18.893327 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 197 0 --:--:-- --:--:-- --:--:-- 200 2025-04-05 12:15:18.905540 | orchestrator | + osism apply --environment custom workarounds 2025-04-05 12:15:20.430395 | orchestrator | 2025-04-05 12:15:20 | INFO  | Trying to run play workarounds in environment custom 2025-04-05 12:15:20.487705 | orchestrator | 2025-04-05 12:15:20 | INFO  | Task ea98f5cc-ebe2-48b2-989d-9719defd805b (workarounds) was prepared for execution. 2025-04-05 12:15:24.148768 | orchestrator | 2025-04-05 12:15:20 | INFO  | It takes a moment until task ea98f5cc-ebe2-48b2-989d-9719defd805b (workarounds) has been started and output is visible here. 
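The wireguard play above generates a server key pair and a preshared key, renders the wg0.conf server configuration together with per-client configuration files, and enables the wg-quick@wg0 service (11 tasks ok, 7 changed). A rough manual equivalent of those steps, shown only as an illustrative sketch; the file locations are assumptions, not taken from the role:

    # generate the server key pair and a preshared key
    # (mirrors "Create public and private key - server" and "Create preshared key")
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
    wg genpsk > /etc/wireguard/preshared.key

    # with wg0.conf rendered from these keys, enable and start the interface
    # (mirrors "Copy wg0.conf configuration file" and "Manage wg-quick@wg0.service service")
    systemctl enable --now wg-quick@wg0

The prepare-wireguard-configuration.sh call above only issues a small curl request (14 bytes transferred, visible as the progress meter) before the workarounds play is queued.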
2025-04-05 12:15:24.148983 | orchestrator | 2025-04-05 12:15:24.151046 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:15:24.151728 | orchestrator | 2025-04-05 12:15:24.151777 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-05 12:15:24.152420 | orchestrator | Saturday 05 April 2025 12:15:24 +0000 (0:00:00.110) 0:00:00.110 ******** 2025-04-05 12:15:24.306106 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-05 12:15:24.378211 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-05 12:15:24.450951 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-05 12:15:24.523908 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-05 12:15:24.660295 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-05 12:15:24.785033 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-05 12:15:24.785464 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-05 12:15:24.786364 | orchestrator | 2025-04-05 12:15:24.787280 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-05 12:15:24.787950 | orchestrator | 2025-04-05 12:15:24.788489 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-05 12:15:24.789210 | orchestrator | Saturday 05 April 2025 12:15:24 +0000 (0:00:00.638) 0:00:00.748 ******** 2025-04-05 12:15:26.860440 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:26.861925 | orchestrator | 2025-04-05 12:15:26.862439 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-05 12:15:26.864930 | orchestrator | 2025-04-05 12:15:26.867300 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-05 12:15:26.871381 | orchestrator | Saturday 05 April 2025 12:15:26 +0000 (0:00:02.073) 0:00:02.822 ******** 2025-04-05 12:15:28.842438 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:15:28.843302 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:15:28.844104 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:15:28.845959 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:15:28.846930 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:15:28.847866 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:15:28.849208 | orchestrator | 2025-04-05 12:15:28.850300 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-05 12:15:28.850917 | orchestrator | 2025-04-05 12:15:28.852418 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-05 12:15:28.853237 | orchestrator | Saturday 05 April 2025 12:15:28 +0000 (0:00:01.981) 0:00:04.803 ******** 2025-04-05 12:15:30.355486 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.356947 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.358345 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.359413 | orchestrator | changed: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.360374 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.361358 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-05 12:15:30.362226 | orchestrator | 2025-04-05 12:15:30.362635 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-04-05 12:15:30.363489 | orchestrator | Saturday 05 April 2025 12:15:30 +0000 (0:00:01.509) 0:00:06.313 ******** 2025-04-05 12:15:32.720347 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:15:32.722107 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:15:32.722676 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:15:32.722698 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:15:32.722717 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:15:32.723582 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:15:32.724347 | orchestrator | 2025-04-05 12:15:32.724999 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-05 12:15:32.725468 | orchestrator | Saturday 05 April 2025 12:15:32 +0000 (0:00:02.368) 0:00:08.682 ******** 2025-04-05 12:15:32.877666 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:15:32.957037 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:15:33.048105 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:15:33.122096 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:15:33.403896 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:15:33.406852 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:15:33.411398 | orchestrator | 2025-04-05 12:15:33.412315 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-04-05 12:15:33.412851 | orchestrator | 2025-04-05 12:15:33.413896 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-04-05 12:15:33.415401 | orchestrator | Saturday 05 April 2025 12:15:33 +0000 (0:00:00.683) 0:00:09.365 ******** 2025-04-05 12:15:35.428918 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:35.429241 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:15:35.430269 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:15:35.433992 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:15:35.434098 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:15:35.434119 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:15:35.434152 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:15:35.434166 | orchestrator | 2025-04-05 12:15:35.434186 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-04-05 12:15:35.434626 | orchestrator | Saturday 05 April 2025 12:15:35 +0000 (0:00:02.025) 0:00:11.390 ******** 2025-04-05 12:15:37.078913 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:37.080582 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:15:37.082162 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:15:37.083537 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:15:37.084678 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:15:37.086409 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:15:37.087555 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:15:37.088176 | orchestrator | 
2025-04-05 12:15:37.089171 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-04-05 12:15:37.090235 | orchestrator | Saturday 05 April 2025 12:15:37 +0000 (0:00:01.645) 0:00:13.036 ******** 2025-04-05 12:15:38.718927 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:38.720353 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:15:38.721469 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:15:38.723108 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:15:38.724623 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:15:38.727000 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:15:38.727544 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:15:38.729355 | orchestrator | 2025-04-05 12:15:38.730594 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-04-05 12:15:38.731496 | orchestrator | Saturday 05 April 2025 12:15:38 +0000 (0:00:01.641) 0:00:14.678 ******** 2025-04-05 12:15:40.570827 | orchestrator | changed: [testbed-manager] 2025-04-05 12:15:40.575217 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:15:40.575474 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:15:40.577441 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:15:40.579085 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:15:40.581023 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:15:40.582259 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:15:40.583698 | orchestrator | 2025-04-05 12:15:40.584632 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-05 12:15:40.585970 | orchestrator | Saturday 05 April 2025 12:15:40 +0000 (0:00:01.849) 0:00:16.528 ******** 2025-04-05 12:15:40.724448 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:15:40.804008 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:15:40.881139 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:15:40.959903 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:15:41.034892 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:15:41.170695 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:15:41.171841 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:15:41.175751 | orchestrator | 2025-04-05 12:15:41.178304 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-05 12:15:41.179400 | orchestrator | 2025-04-05 12:15:41.180476 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-05 12:15:41.181266 | orchestrator | Saturday 05 April 2025 12:15:41 +0000 (0:00:00.602) 0:00:17.130 ******** 2025-04-05 12:15:44.316166 | orchestrator | ok: [testbed-manager] 2025-04-05 12:15:44.317327 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:15:44.319065 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:15:44.320710 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:15:44.321675 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:15:44.322868 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:15:44.323483 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:15:44.324272 | orchestrator | 2025-04-05 12:15:44.324724 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:15:44.325909 | orchestrator | 2025-04-05 12:15:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-05 12:15:44.326430 | orchestrator | 2025-04-05 12:15:44 | INFO  | Please wait and do not abort execution. 2025-04-05 12:15:44.326458 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:15:44.327230 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.327481 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.328277 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.328788 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.329653 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.330342 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:44.331124 | orchestrator | 2025-04-05 12:15:44.331487 | orchestrator | 2025-04-05 12:15:44.332233 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:15:44.332556 | orchestrator | Saturday 05 April 2025 12:15:44 +0000 (0:00:03.144) 0:00:20.275 ******** 2025-04-05 12:15:44.333520 | orchestrator | =============================================================================== 2025-04-05 12:15:44.334142 | orchestrator | Install python3-docker -------------------------------------------------- 3.14s 2025-04-05 12:15:44.336222 | orchestrator | Run update-ca-certificates ---------------------------------------------- 2.37s 2025-04-05 12:15:44.336766 | orchestrator | Apply netplan configuration --------------------------------------------- 2.07s 2025-04-05 12:15:44.337412 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 2.03s 2025-04-05 12:15:44.338103 | orchestrator | Apply netplan configuration --------------------------------------------- 1.98s 2025-04-05 12:15:44.338899 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.85s 2025-04-05 12:15:44.339294 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2025-04-05 12:15:44.339868 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.64s 2025-04-05 12:15:44.340417 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s 2025-04-05 12:15:44.340852 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.68s 2025-04-05 12:15:44.341382 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.64s 2025-04-05 12:15:44.342102 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s 2025-04-05 12:15:44.810656 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-05 12:15:46.430311 | orchestrator | 2025-04-05 12:15:46 | INFO  | Task fa1bce2f-3668-4d14-aa2f-aa1d6bbf5718 (reboot) was prepared for execution. 2025-04-05 12:15:49.908606 | orchestrator | 2025-04-05 12:15:46 | INFO  | It takes a moment until task fa1bce2f-3668-4d14-aa2f-aa1d6bbf5718 (reboot) has been started and output is visible here. 
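The workarounds play above re-applies the netplan configuration on the manager and the other nodes, distributes the custom testbed CA certificate to the non-manager nodes, installs a workarounds.sh script with a matching systemd unit on every host, and installs python3-docker. Because all hosts run Ubuntu 24.04, the Debian branch is taken: update-ca-certificates runs and the RedHat counterpart update-ca-trust is skipped. A condensed shell sketch of the certificate and service steps on a single node (the destination path follows the usual Debian/Ubuntu convention and is an assumption, as the log does not show it):

    # trust the testbed CA (mirrors "Copy custom CA certificates" and
    # "Run update-ca-certificates"); RedHat-family hosts would use
    # /etc/pki/ca-trust/source/anchors/ plus update-ca-trust instead
    sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
        /usr/local/share/ca-certificates/testbed.crt
    sudo update-ca-certificates

    # register the workaround service (mirrors the unit copy, daemon reload
    # and enable tasks; the unit and script contents are not shown in the log)
    sudo systemctl daemon-reload
    sudo systemctl enable workarounds.service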
2025-04-05 12:15:49.908743 | orchestrator | 2025-04-05 12:15:49.909521 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:49.910797 | orchestrator | 2025-04-05 12:15:49.911571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:49.912185 | orchestrator | Saturday 05 April 2025 12:15:49 +0000 (0:00:00.152) 0:00:00.152 ******** 2025-04-05 12:15:49.997660 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:15:49.998974 | orchestrator | 2025-04-05 12:15:49.999053 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:49.999870 | orchestrator | Saturday 05 April 2025 12:15:49 +0000 (0:00:00.091) 0:00:00.243 ******** 2025-04-05 12:15:50.844871 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:15:50.845344 | orchestrator | 2025-04-05 12:15:50.845892 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 12:15:50.847219 | orchestrator | Saturday 05 April 2025 12:15:50 +0000 (0:00:00.846) 0:00:01.090 ******** 2025-04-05 12:15:50.936394 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:15:50.937339 | orchestrator | 2025-04-05 12:15:50.938530 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:50.939461 | orchestrator | 2025-04-05 12:15:50.939846 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:50.940370 | orchestrator | Saturday 05 April 2025 12:15:50 +0000 (0:00:00.090) 0:00:01.181 ******** 2025-04-05 12:15:51.022890 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:15:51.023586 | orchestrator | 2025-04-05 12:15:51.024456 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:51.025433 | orchestrator | Saturday 05 April 2025 12:15:51 +0000 (0:00:00.087) 0:00:01.269 ******** 2025-04-05 12:15:51.649268 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:15:51.650849 | orchestrator | 2025-04-05 12:15:51.651546 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 12:15:51.652070 | orchestrator | Saturday 05 April 2025 12:15:51 +0000 (0:00:00.625) 0:00:01.894 ******** 2025-04-05 12:15:51.754285 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:15:51.754856 | orchestrator | 2025-04-05 12:15:51.755122 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:51.755460 | orchestrator | 2025-04-05 12:15:51.755946 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:51.756302 | orchestrator | Saturday 05 April 2025 12:15:51 +0000 (0:00:00.105) 0:00:02.000 ******** 2025-04-05 12:15:51.890670 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:15:51.890754 | orchestrator | 2025-04-05 12:15:51.891133 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:51.891441 | orchestrator | Saturday 05 April 2025 12:15:51 +0000 (0:00:00.137) 0:00:02.137 ******** 2025-04-05 12:15:52.579658 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:15:52.580066 | orchestrator | 2025-04-05 12:15:52.581214 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 
12:15:52.581775 | orchestrator | Saturday 05 April 2025 12:15:52 +0000 (0:00:00.686) 0:00:02.824 ******** 2025-04-05 12:15:52.681024 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:15:52.682069 | orchestrator | 2025-04-05 12:15:52.682971 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:52.683325 | orchestrator | 2025-04-05 12:15:52.683956 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:52.684451 | orchestrator | Saturday 05 April 2025 12:15:52 +0000 (0:00:00.101) 0:00:02.926 ******** 2025-04-05 12:15:52.765914 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:15:52.766157 | orchestrator | 2025-04-05 12:15:52.766730 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:52.767463 | orchestrator | Saturday 05 April 2025 12:15:52 +0000 (0:00:00.085) 0:00:03.011 ******** 2025-04-05 12:15:53.459782 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:15:53.460380 | orchestrator | 2025-04-05 12:15:53.461287 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 12:15:53.462240 | orchestrator | Saturday 05 April 2025 12:15:53 +0000 (0:00:00.693) 0:00:03.704 ******** 2025-04-05 12:15:53.556056 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:15:53.557452 | orchestrator | 2025-04-05 12:15:53.557692 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:53.558672 | orchestrator | 2025-04-05 12:15:53.559500 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:53.560174 | orchestrator | Saturday 05 April 2025 12:15:53 +0000 (0:00:00.096) 0:00:03.801 ******** 2025-04-05 12:15:53.658106 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:15:53.658544 | orchestrator | 2025-04-05 12:15:53.659366 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:53.660041 | orchestrator | Saturday 05 April 2025 12:15:53 +0000 (0:00:00.102) 0:00:03.903 ******** 2025-04-05 12:15:54.324480 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:15:54.325660 | orchestrator | 2025-04-05 12:15:54.326302 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 12:15:54.327586 | orchestrator | Saturday 05 April 2025 12:15:54 +0000 (0:00:00.665) 0:00:04.568 ******** 2025-04-05 12:15:54.430729 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:15:54.432142 | orchestrator | 2025-04-05 12:15:54.433514 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-05 12:15:54.434269 | orchestrator | 2025-04-05 12:15:54.434826 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-05 12:15:54.435370 | orchestrator | Saturday 05 April 2025 12:15:54 +0000 (0:00:00.104) 0:00:04.673 ******** 2025-04-05 12:15:54.526438 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:15:54.527260 | orchestrator | 2025-04-05 12:15:54.528368 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-05 12:15:54.528853 | orchestrator | Saturday 05 April 2025 12:15:54 +0000 (0:00:00.098) 0:00:04.771 ******** 2025-04-05 12:15:55.179788 | orchestrator | changed: [testbed-node-5] 2025-04-05 
12:15:55.180214 | orchestrator | 2025-04-05 12:15:55.180246 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-05 12:15:55.180894 | orchestrator | Saturday 05 April 2025 12:15:55 +0000 (0:00:00.651) 0:00:05.423 ******** 2025-04-05 12:15:55.215032 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:15:55.215300 | orchestrator | 2025-04-05 12:15:55.215326 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:15:55.215663 | orchestrator | 2025-04-05 12:15:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:15:55.215890 | orchestrator | 2025-04-05 12:15:55 | INFO  | Please wait and do not abort execution. 2025-04-05 12:15:55.217176 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.218298 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.219506 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.220486 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.220863 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.221334 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:15:55.221578 | orchestrator | 2025-04-05 12:15:55.222101 | orchestrator | 2025-04-05 12:15:55.222409 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:15:55.222820 | orchestrator | Saturday 05 April 2025 12:15:55 +0000 (0:00:00.036) 0:00:05.459 ******** 2025-04-05 12:15:55.223188 | orchestrator | =============================================================================== 2025-04-05 12:15:55.223512 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.17s 2025-04-05 12:15:55.223894 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.60s 2025-04-05 12:15:55.224233 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2025-04-05 12:15:55.674304 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-05 12:15:57.264116 | orchestrator | 2025-04-05 12:15:57 | INFO  | Task 2d615c1d-ca39-4c97-b111-256c0516a8ac (wait-for-connection) was prepared for execution. 2025-04-05 12:16:00.763163 | orchestrator | 2025-04-05 12:15:57 | INFO  | It takes a moment until task 2d615c1d-ca39-4c97-b111-256c0516a8ac (wait-for-connection) has been started and output is visible here. 
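The reboot playbook above handles each node in its own short play and is gated on the ireallymeanit extra variable passed on the command line; the "Exit playbook, if user did not mean to reboot systems" task is skipped only because ireallymeanit=yes was supplied, and the reboot itself is fired without waiting so that reachability can be checked separately afterwards. The same confirmation-guard pattern in plain shell, purely as an illustration:

    # refuse to do anything destructive unless the caller explicitly confirms
    if [ "${ireallymeanit:-no}" != "yes" ]; then
        echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
        exit 1
    fi
    ssh "$node" sudo systemctl reboot   # fire and forget; reachability is verified later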
2025-04-05 12:16:00.763292 | orchestrator | 2025-04-05 12:16:00.765767 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-05 12:16:12.210755 | orchestrator | 2025-04-05 12:16:12.210936 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-05 12:16:12.210977 | orchestrator | Saturday 05 April 2025 12:16:00 +0000 (0:00:00.170) 0:00:00.170 ******** 2025-04-05 12:16:12.211011 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:16:12.211276 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:16:12.211305 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:16:12.213571 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:16:12.214007 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:16:12.214067 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:16:12.214234 | orchestrator | 2025-04-05 12:16:12.215148 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:16:12.215601 | orchestrator | 2025-04-05 12:16:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:16:12.217324 | orchestrator | 2025-04-05 12:16:12 | INFO  | Please wait and do not abort execution. 2025-04-05 12:16:12.217353 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.218090 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.218981 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.219523 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.220134 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.220718 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:12.221230 | orchestrator | 2025-04-05 12:16:12.221746 | orchestrator | 2025-04-05 12:16:12.222273 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:16:12.222822 | orchestrator | Saturday 05 April 2025 12:16:12 +0000 (0:00:11.448) 0:00:11.618 ******** 2025-04-05 12:16:12.223226 | orchestrator | =============================================================================== 2025-04-05 12:16:12.223682 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.45s 2025-04-05 12:16:12.667166 | orchestrator | + osism apply hddtemp 2025-04-05 12:16:14.228993 | orchestrator | 2025-04-05 12:16:14 | INFO  | Task 0670bf1d-2e15-4368-bb41-94da16c57c2c (hddtemp) was prepared for execution. 2025-04-05 12:16:18.022649 | orchestrator | 2025-04-05 12:16:14 | INFO  | It takes a moment until task 0670bf1d-2e15-4368-bb41-94da16c57c2c (hddtemp) has been started and output is visible here. 
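The wait-for-connection play above simply blocks until every rebooted node accepts connections again, which took about 11 seconds here. A rough shell equivalent of that readiness check (node names as in the inventory above):

    for node in testbed-node-0 testbed-node-1 testbed-node-2 \
                testbed-node-3 testbed-node-4 testbed-node-5; do
        # retry until an SSH session can be opened again after the reboot
        until ssh -o ConnectTimeout=5 "$node" true; do
            sleep 5
        done
    done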
2025-04-05 12:16:18.022874 | orchestrator | 2025-04-05 12:16:18.026109 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-05 12:16:18.026287 | orchestrator | 2025-04-05 12:16:18.027167 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-05 12:16:18.027492 | orchestrator | Saturday 05 April 2025 12:16:18 +0000 (0:00:00.217) 0:00:00.217 ******** 2025-04-05 12:16:18.133314 | orchestrator | ok: [testbed-manager] 2025-04-05 12:16:18.191676 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:16:18.250890 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:16:18.306880 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:16:18.418568 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:16:18.539552 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:16:18.540036 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:16:18.544089 | orchestrator | 2025-04-05 12:16:19.540854 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-05 12:16:19.541102 | orchestrator | Saturday 05 April 2025 12:16:18 +0000 (0:00:00.516) 0:00:00.733 ******** 2025-04-05 12:16:19.541143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:16:19.543177 | orchestrator | 2025-04-05 12:16:21.641408 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-05 12:16:21.641492 | orchestrator | Saturday 05 April 2025 12:16:19 +0000 (0:00:00.999) 0:00:01.733 ******** 2025-04-05 12:16:21.641521 | orchestrator | ok: [testbed-manager] 2025-04-05 12:16:21.645175 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:16:21.646254 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:16:21.646277 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:16:21.646319 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:16:21.646369 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:16:21.647296 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:16:21.648056 | orchestrator | 2025-04-05 12:16:21.649665 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-05 12:16:22.153587 | orchestrator | Saturday 05 April 2025 12:16:21 +0000 (0:00:02.102) 0:00:03.836 ******** 2025-04-05 12:16:22.153670 | orchestrator | changed: [testbed-manager] 2025-04-05 12:16:22.234362 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:16:22.699431 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:16:22.700516 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:16:22.701358 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:16:22.704144 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:16:22.704764 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:16:22.704787 | orchestrator | 2025-04-05 12:16:22.704822 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-05 12:16:22.705588 | orchestrator | Saturday 05 April 2025 12:16:22 +0000 (0:00:01.056) 0:00:04.892 ******** 2025-04-05 12:16:23.767061 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:16:23.767907 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:16:23.769395 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:16:23.770776 | orchestrator | ok: [testbed-node-3] 2025-04-05 
12:16:23.774393 | orchestrator | ok: [testbed-manager] 2025-04-05 12:16:23.774423 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:16:24.191820 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:16:24.191890 | orchestrator | 2025-04-05 12:16:24.191906 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-05 12:16:24.191920 | orchestrator | Saturday 05 April 2025 12:16:23 +0000 (0:00:01.069) 0:00:05.962 ******** 2025-04-05 12:16:24.191942 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:16:24.289695 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:16:24.366560 | orchestrator | changed: [testbed-manager] 2025-04-05 12:16:24.445709 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:16:24.562974 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:16:24.566654 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:16:24.567270 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:16:24.567499 | orchestrator | 2025-04-05 12:16:24.568348 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-05 12:16:24.568732 | orchestrator | Saturday 05 April 2025 12:16:24 +0000 (0:00:00.794) 0:00:06.756 ******** 2025-04-05 12:16:36.700977 | orchestrator | changed: [testbed-manager] 2025-04-05 12:16:36.701743 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:16:36.701920 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:16:36.702118 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:16:36.703337 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:16:36.703926 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:16:36.704282 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:16:36.705178 | orchestrator | 2025-04-05 12:16:36.705701 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-05 12:16:36.706233 | orchestrator | Saturday 05 April 2025 12:16:36 +0000 (0:00:12.132) 0:00:18.889 ******** 2025-04-05 12:16:37.844383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:16:37.844544 | orchestrator | 2025-04-05 12:16:37.845102 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-04-05 12:16:37.845951 | orchestrator | Saturday 05 April 2025 12:16:37 +0000 (0:00:01.147) 0:00:20.036 ******** 2025-04-05 12:16:39.734556 | orchestrator | changed: [testbed-manager] 2025-04-05 12:16:39.734944 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:16:39.736492 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:16:39.737085 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:16:39.738433 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:16:39.741087 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:16:39.741991 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:16:39.742626 | orchestrator | 2025-04-05 12:16:39.745100 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:16:39.745146 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:16:39.745182 | orchestrator | 2025-04-05 12:16:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
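The hddtemp play above removes the obsolete hddtemp package and switches disk temperature monitoring to the kernel's drivetemp hwmon driver plus lm-sensors: drivetemp is registered for loading at boot on every host, loaded immediately where it was not yet active (only the manager needed that here), and lm-sensors is installed and its service enabled. A manual equivalent on one Debian/Ubuntu host (illustrative; the exact file name under /etc/modules-load.d/ is an assumption):

    # make drivetemp load at boot and load it right away
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf
    sudo modprobe drivetemp

    # install lm-sensors; the drive temperatures then show up in its output
    sudo apt-get install -y lm-sensors
    sensors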
2025-04-05 12:16:39.745760 | orchestrator | 2025-04-05 12:16:39 | INFO  | Please wait and do not abort execution. 2025-04-05 12:16:39.745791 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.746530 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.747345 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.748416 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.749438 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.749980 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:39.750392 | orchestrator | 2025-04-05 12:16:39.750887 | orchestrator | 2025-04-05 12:16:39.751342 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:16:39.751677 | orchestrator | Saturday 05 April 2025 12:16:39 +0000 (0:00:01.891) 0:00:21.928 ******** 2025-04-05 12:16:39.752145 | orchestrator | =============================================================================== 2025-04-05 12:16:39.752529 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.13s 2025-04-05 12:16:39.752976 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.10s 2025-04-05 12:16:39.753417 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s 2025-04-05 12:16:39.755108 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.15s 2025-04-05 12:16:39.755453 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.07s 2025-04-05 12:16:39.756019 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.06s 2025-04-05 12:16:39.756351 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.00s 2025-04-05 12:16:39.756935 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s 2025-04-05 12:16:39.757341 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.52s 2025-04-05 12:16:40.250137 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-05 12:16:41.539174 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-05 12:16:41.539585 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-05 12:16:41.539602 | orchestrator | + local max_attempts=60 2025-04-05 12:16:41.539609 | orchestrator | + local name=ceph-ansible 2025-04-05 12:16:41.539615 | orchestrator | + local attempt_num=1 2025-04-05 12:16:41.539625 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-05 12:16:41.568866 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-05 12:16:41.569094 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-05 12:16:41.569114 | orchestrator | + local max_attempts=60 2025-04-05 12:16:41.569124 | orchestrator | + local name=kolla-ansible 2025-04-05 12:16:41.569135 | orchestrator | + local attempt_num=1 2025-04-05 12:16:41.569149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' 
kolla-ansible 2025-04-05 12:16:41.596458 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-05 12:16:41.597321 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-05 12:16:41.597340 | orchestrator | + local max_attempts=60 2025-04-05 12:16:41.597352 | orchestrator | + local name=osism-ansible 2025-04-05 12:16:41.597363 | orchestrator | + local attempt_num=1 2025-04-05 12:16:41.597377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-05 12:16:41.624482 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-05 12:16:41.765845 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-05 12:16:41.765945 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-05 12:16:41.765980 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-05 12:16:41.902920 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-05 12:16:42.091175 | orchestrator | ARA in osism-ansible already disabled. 2025-04-05 12:16:42.262645 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-05 12:16:42.263541 | orchestrator | + osism apply gather-facts 2025-04-05 12:16:43.842429 | orchestrator | 2025-04-05 12:16:43 | INFO  | Task 0d135463-e4a5-47d3-b3b9-c0ed098bef51 (gather-facts) was prepared for execution. 2025-04-05 12:16:47.362217 | orchestrator | 2025-04-05 12:16:43 | INFO  | It takes a moment until task 0d135463-e4a5-47d3-b3b9-c0ed098bef51 (gather-facts) has been started and output is visible here. 2025-04-05 12:16:47.362353 | orchestrator | 2025-04-05 12:16:47.362566 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-05 12:16:47.365501 | orchestrator | 2025-04-05 12:16:47.367073 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-05 12:16:47.370004 | orchestrator | Saturday 05 April 2025 12:16:47 +0000 (0:00:00.161) 0:00:00.161 ******** 2025-04-05 12:16:52.342257 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:16:52.342464 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:16:52.343714 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:16:52.346778 | orchestrator | ok: [testbed-manager] 2025-04-05 12:16:52.347155 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:16:52.347183 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:16:52.347204 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:16:52.347849 | orchestrator | 2025-04-05 12:16:52.349146 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-05 12:16:52.350192 | orchestrator | 2025-04-05 12:16:52.353781 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-05 12:16:52.354302 | orchestrator | Saturday 05 April 2025 12:16:52 +0000 (0:00:04.983) 0:00:05.144 ******** 2025-04-05 12:16:52.480969 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:16:52.547233 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:16:52.615390 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:16:52.684977 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:16:52.746855 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:16:52.777719 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:16:52.778791 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:16:52.779563 | orchestrator | 2025-04-05 12:16:52.780463 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 
12:16:52.780746 | orchestrator | 2025-04-05 12:16:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:16:52.780985 | orchestrator | 2025-04-05 12:16:52 | INFO  | Please wait and do not abort execution. 2025-04-05 12:16:52.781626 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.783518 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.783972 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.784452 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.784992 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.785454 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.785985 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:16:52.786632 | orchestrator | 2025-04-05 12:16:52.787093 | orchestrator | 2025-04-05 12:16:52.787621 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:16:52.787989 | orchestrator | Saturday 05 April 2025 12:16:52 +0000 (0:00:00.435) 0:00:05.580 ******** 2025-04-05 12:16:52.788393 | orchestrator | =============================================================================== 2025-04-05 12:16:52.788867 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.98s 2025-04-05 12:16:52.789496 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2025-04-05 12:16:53.198085 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-05 12:16:53.212783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-05 12:16:53.222400 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-05 12:16:53.243418 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-05 12:16:53.260041 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-05 12:16:53.275350 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-05 12:16:53.293305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-05 12:16:53.310628 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-05 12:16:53.328945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-05 12:16:53.347067 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-05 12:16:53.364177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-05 12:16:53.382604 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-05 12:16:53.399943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-04-05 12:16:53.418727 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-05 12:16:53.435327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-05 12:16:53.452642 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-05 12:16:53.469105 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-05 12:16:53.483782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-05 12:16:53.500385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-05 12:16:53.517034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-05 12:16:53.533921 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-05 12:16:53.682841 | orchestrator | changed 2025-04-05 12:16:53.767588 | 2025-04-05 12:16:53.767714 | TASK [Deploy services] 2025-04-05 12:16:53.874792 | orchestrator | skipping: Conditional result was False 2025-04-05 12:16:53.893610 | 2025-04-05 12:16:53.893765 | TASK [Deploy in a nutshell] 2025-04-05 12:16:54.571905 | orchestrator | + set -e 2025-04-05 12:16:54.572072 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 12:16:54.572103 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 12:16:54.572121 | orchestrator | ++ INTERACTIVE=false 2025-04-05 12:16:54.572164 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 12:16:54.572183 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 12:16:54.572198 | orchestrator | + source /opt/manager-vars.sh 2025-04-05 12:16:54.572226 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-05 12:16:54.572250 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-05 12:16:54.572278 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-05 12:16:54.573076 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-05 12:16:54.573098 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-05 12:16:54.573113 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-05 12:16:54.573127 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:16:54.573142 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:16:54.573156 | orchestrator | 2025-04-05 12:16:54.573171 | orchestrator | # PULL IMAGES 2025-04-05 12:16:54.573186 | orchestrator | 2025-04-05 12:16:54.573200 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-05 12:16:54.573214 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-05 12:16:54.573228 | orchestrator | ++ export ARA=false 2025-04-05 12:16:54.573243 | orchestrator | ++ ARA=false 2025-04-05 12:16:54.573257 | orchestrator | ++ export TEMPEST=false 2025-04-05 12:16:54.573271 | orchestrator | ++ TEMPEST=false 2025-04-05 12:16:54.573285 | orchestrator | ++ export IS_ZUUL=true 2025-04-05 12:16:54.573299 | orchestrator | ++ IS_ZUUL=true 2025-04-05 12:16:54.573313 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 
12:16:54.573327 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 12:16:54.573341 | orchestrator | ++ export EXTERNAL_API=false 2025-04-05 12:16:54.573356 | orchestrator | ++ EXTERNAL_API=false 2025-04-05 12:16:54.573370 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-05 12:16:54.573383 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-05 12:16:54.573404 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-05 12:16:54.573419 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-05 12:16:54.573433 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-05 12:16:54.573447 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-05 12:16:54.573461 | orchestrator | + echo 2025-04-05 12:16:54.573475 | orchestrator | + echo '# PULL IMAGES' 2025-04-05 12:16:54.573488 | orchestrator | + echo 2025-04-05 12:16:54.573507 | orchestrator | ++ semver latest 7.0.0 2025-04-05 12:16:54.610013 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-05 12:16:56.071698 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-05 12:16:56.071778 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-05 12:16:56.071832 | orchestrator | 2025-04-05 12:16:56 | INFO  | Trying to run play pull-images in environment custom 2025-04-05 12:16:56.128168 | orchestrator | 2025-04-05 12:16:56 | INFO  | Task e643fa90-b53b-4790-bdff-1e179d0dddea (pull-images) was prepared for execution. 2025-04-05 12:16:59.587161 | orchestrator | 2025-04-05 12:16:56 | INFO  | It takes a moment until task e643fa90-b53b-4790-bdff-1e179d0dddea (pull-images) has been started and output is visible here. 2025-04-05 12:16:59.789431 | orchestrator | 2025-04-05 12:17:46.075540 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-05 12:17:46.075659 | orchestrator | 2025-04-05 12:17:46.075673 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-05 12:17:46.075683 | orchestrator | Saturday 05 April 2025 12:16:59 +0000 (0:00:00.113) 0:00:00.113 ******** 2025-04-05 12:17:46.075708 | orchestrator | changed: [testbed-manager] 2025-04-05 12:18:32.470906 | orchestrator | 2025-04-05 12:18:32.471044 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-05 12:18:32.471065 | orchestrator | Saturday 05 April 2025 12:17:46 +0000 (0:00:46.487) 0:00:46.601 ******** 2025-04-05 12:18:32.471097 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-05 12:18:32.472215 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-05 12:18:32.472249 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-05 12:18:32.472269 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-05 12:18:32.472296 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-05 12:18:32.472311 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-05 12:18:32.472329 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-05 12:18:32.472374 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-05 12:18:32.472389 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-05 12:18:32.472416 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-04-05 12:18:32.475688 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-05 12:18:32.477409 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-05 12:18:32.477451 | orchestrator | changed: [testbed-manager] => 
(item=mariadb) 2025-04-05 12:18:32.477477 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-05 12:18:32.477501 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-05 12:18:32.477524 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-05 12:18:32.477544 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-04-05 12:18:32.477558 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-05 12:18:32.477572 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-05 12:18:32.477585 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-05 12:18:32.477599 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-05 12:18:32.477620 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-05 12:18:32.477652 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-05 12:18:32.477863 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-05 12:18:32.478216 | orchestrator | 2025-04-05 12:18:32.478553 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:18:32.478841 | orchestrator | 2025-04-05 12:18:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:18:32.478870 | orchestrator | 2025-04-05 12:18:32 | INFO  | Please wait and do not abort execution. 2025-04-05 12:18:32.478891 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:18:32.479151 | orchestrator | 2025-04-05 12:18:32.479440 | orchestrator | 2025-04-05 12:18:32.479825 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:18:32.481578 | orchestrator | Saturday 05 April 2025 12:18:32 +0000 (0:00:46.392) 0:01:32.994 ******** 2025-04-05 12:18:32.481923 | orchestrator | =============================================================================== 2025-04-05 12:18:32.482192 | orchestrator | Pull keystone image ---------------------------------------------------- 46.49s 2025-04-05 12:18:32.482222 | orchestrator | Pull other images ------------------------------------------------------ 46.39s 2025-04-05 12:18:34.298101 | orchestrator | 2025-04-05 12:18:34 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-05 12:18:34.349716 | orchestrator | 2025-04-05 12:18:34 | INFO  | Task 18ee35aa-4118-4891-9c5a-d498b05aebd8 (wipe-partitions) was prepared for execution. 2025-04-05 12:18:37.870862 | orchestrator | 2025-04-05 12:18:34 | INFO  | It takes a moment until task 18ee35aa-4118-4891-9c5a-d498b05aebd8 (wipe-partitions) has been started and output is visible here. 
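Stepping back to the shell trace at 12:16:41 above: after restarting docker-compose@manager, the deploy script waits until the ceph-ansible, kolla-ansible and osism-ansible containers report a healthy Docker health status by reading .State.Health.Status with docker inspect. A sketch of such a helper, consistent with the variables visible in the trace (max_attempts, name, attempt_num); the retry delay is an assumption, since all three containers were already healthy and the loop body never ran in this build:

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
            if [ "$attempt_num" -ge "$max_attempts" ]; then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5   # assumed delay; not visible in the trace
        done
    }

    wait_for_container_healthy 60 osism-ansible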
2025-04-05 12:18:37.870990 | orchestrator | 2025-04-05 12:18:37.871060 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-05 12:18:37.871079 | orchestrator | 2025-04-05 12:18:37.871130 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-05 12:18:37.871187 | orchestrator | Saturday 05 April 2025 12:18:37 +0000 (0:00:00.120) 0:00:00.120 ******** 2025-04-05 12:18:38.467861 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:18:38.468310 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:18:38.468339 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:18:38.468358 | orchestrator | 2025-04-05 12:18:38.468529 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-05 12:18:38.468741 | orchestrator | Saturday 05 April 2025 12:18:38 +0000 (0:00:00.598) 0:00:00.718 ******** 2025-04-05 12:18:38.633469 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:18:38.727534 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:18:38.727901 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:18:38.727935 | orchestrator | 2025-04-05 12:18:38.728156 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-05 12:18:38.728438 | orchestrator | Saturday 05 April 2025 12:18:38 +0000 (0:00:00.258) 0:00:00.977 ******** 2025-04-05 12:18:39.330635 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:18:39.330764 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:18:39.330785 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:18:39.330831 | orchestrator | 2025-04-05 12:18:39.331275 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-05 12:18:39.465664 | orchestrator | Saturday 05 April 2025 12:18:39 +0000 (0:00:00.604) 0:00:01.582 ******** 2025-04-05 12:18:39.465738 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:18:39.545478 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:18:39.545635 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:18:39.545951 | orchestrator | 2025-04-05 12:18:39.545983 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-05 12:18:39.546298 | orchestrator | Saturday 05 April 2025 12:18:39 +0000 (0:00:00.214) 0:00:01.796 ******** 2025-04-05 12:18:40.661961 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-05 12:18:40.666089 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-05 12:18:40.667428 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-05 12:18:40.667454 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-05 12:18:40.667475 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-05 12:18:40.668478 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-05 12:18:40.669356 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-05 12:18:40.670501 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-05 12:18:40.671134 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-05 12:18:40.671849 | orchestrator | 2025-04-05 12:18:40.672574 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-05 12:18:40.673082 | orchestrator | Saturday 05 April 2025 12:18:40 +0000 (0:00:01.115) 0:00:02.911 ******** 2025-04-05 12:18:41.831620 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-05 12:18:41.832134 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-05 12:18:41.833488 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-05 12:18:41.833914 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-05 12:18:41.839862 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-05 12:18:41.840689 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-04-05 12:18:41.840715 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-05 12:18:41.840731 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-05 12:18:41.840751 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-05 12:18:41.841361 | orchestrator | 2025-04-05 12:18:41.841390 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-05 12:18:41.841656 | orchestrator | Saturday 05 April 2025 12:18:41 +0000 (0:00:01.170) 0:00:04.082 ******** 2025-04-05 12:18:43.805221 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-05 12:18:43.806120 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-05 12:18:43.807647 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-05 12:18:43.813788 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-05 12:18:43.814272 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-05 12:18:43.814767 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-05 12:18:43.815287 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-05 12:18:43.817749 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-05 12:18:43.821815 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-05 12:18:43.821845 | orchestrator | 2025-04-05 12:18:43.821884 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-05 12:18:43.821906 | orchestrator | Saturday 05 April 2025 12:18:43 +0000 (0:00:01.972) 0:00:06.054 ******** 2025-04-05 12:18:44.369134 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:18:44.369976 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:18:44.370437 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:18:44.371191 | orchestrator | 2025-04-05 12:18:44.372593 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-04-05 12:18:44.373866 | orchestrator | Saturday 05 April 2025 12:18:44 +0000 (0:00:00.565) 0:00:06.620 ******** 2025-04-05 12:18:44.937175 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:18:44.937323 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:18:44.937348 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:18:44.937588 | orchestrator | 2025-04-05 12:18:44.938254 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:18:44.938561 | orchestrator | 2025-04-05 12:18:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:18:44.938587 | orchestrator | 2025-04-05 12:18:44 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:18:44.938613 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:44.938921 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:44.939267 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:44.943660 | orchestrator | 2025-04-05 12:18:44.943863 | orchestrator | 2025-04-05 12:18:44.943888 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:18:44.943904 | orchestrator | Saturday 05 April 2025 12:18:44 +0000 (0:00:00.563) 0:00:07.183 ******** 2025-04-05 12:18:44.943918 | orchestrator | =============================================================================== 2025-04-05 12:18:44.943937 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 1.97s 2025-04-05 12:18:44.944130 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.17s 2025-04-05 12:18:44.944317 | orchestrator | Check device availability ----------------------------------------------- 1.12s 2025-04-05 12:18:44.944551 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2025-04-05 12:18:44.944908 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2025-04-05 12:18:44.945299 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2025-04-05 12:18:44.945512 | orchestrator | Request device events from the kernel ----------------------------------- 0.56s 2025-04-05 12:18:44.945908 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2025-04-05 12:18:44.954846 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2025-04-05 12:18:46.664747 | orchestrator | 2025-04-05 12:18:46 | INFO  | Task 6e738c89-e613-4eb3-99eb-ba8a740980a9 (facts) was prepared for execution. 2025-04-05 12:18:50.195769 | orchestrator | 2025-04-05 12:18:46 | INFO  | It takes a moment until task 6e738c89-e613-4eb3-99eb-ba8a740980a9 (facts) has been started and output is visible here. 
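
The wipe-partitions play that just completed only reports task names, not the commands behind them. The following per-node equivalent is therefore an approximation: the device list (/dev/sdb, /dev/sdc, /dev/sdd) and the 32M figure come from the play output above, while the concrete flags are assumptions.

    # Approximate shell equivalent of the wipe-partitions play on each storage node.
    # Device list and the 32M figure are taken from the play output; the exact
    # flags used by the playbook are not shown in the log and are assumed here.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                       # "Wipe partitions with wipefs"
        dd if=/dev/zero of="$dev" bs=1M count=32  # "Overwrite first 32M with zeros"
    done
    udevadm control --reload-rules                # "Reload udev rules"
    udevadm trigger                               # "Request device events from the kernel"
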
2025-04-05 12:18:50.195900 | orchestrator | 2025-04-05 12:18:50.196082 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-05 12:18:50.196414 | orchestrator | 2025-04-05 12:18:50.196449 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-05 12:18:50.196787 | orchestrator | Saturday 05 April 2025 12:18:50 +0000 (0:00:00.229) 0:00:00.229 ******** 2025-04-05 12:18:51.282415 | orchestrator | ok: [testbed-manager] 2025-04-05 12:18:51.282771 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:18:51.288922 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:18:51.290951 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:18:51.290993 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:18:51.291008 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:18:51.291029 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:18:51.291420 | orchestrator | 2025-04-05 12:18:51.291456 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-05 12:18:51.291479 | orchestrator | Saturday 05 April 2025 12:18:51 +0000 (0:00:01.086) 0:00:01.316 ******** 2025-04-05 12:18:51.433965 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:18:51.504434 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:18:51.579146 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:18:51.652985 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:18:51.724683 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:18:52.348481 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:18:52.349973 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:18:52.351572 | orchestrator | 2025-04-05 12:18:52.353777 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-05 12:18:52.354183 | orchestrator | 2025-04-05 12:18:52.355900 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-05 12:18:52.357035 | orchestrator | Saturday 05 April 2025 12:18:52 +0000 (0:00:01.069) 0:00:02.385 ******** 2025-04-05 12:18:56.372085 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:18:56.372968 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:18:56.373004 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:18:56.373028 | orchestrator | ok: [testbed-manager] 2025-04-05 12:18:56.373306 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:18:56.373770 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:18:56.374752 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:18:56.374943 | orchestrator | 2025-04-05 12:18:56.375277 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-05 12:18:56.375991 | orchestrator | 2025-04-05 12:18:56.376226 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-05 12:18:56.376962 | orchestrator | Saturday 05 April 2025 12:18:56 +0000 (0:00:04.023) 0:00:06.409 ******** 2025-04-05 12:18:56.525388 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:18:56.599643 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:18:56.673621 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:18:56.746207 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:18:56.824497 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:18:56.855324 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:18:56.855458 | orchestrator | skipping: 
[testbed-node-5] 2025-04-05 12:18:56.856134 | orchestrator | 2025-04-05 12:18:56.856953 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:18:56.857682 | orchestrator | 2025-04-05 12:18:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:18:56.857707 | orchestrator | 2025-04-05 12:18:56 | INFO  | Please wait and do not abort execution. 2025-04-05 12:18:56.857727 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.858171 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.858467 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.858953 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.859345 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.859665 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.860432 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:18:56.860612 | orchestrator | 2025-04-05 12:18:56.860643 | orchestrator | 2025-04-05 12:18:56.861037 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:18:56.861329 | orchestrator | Saturday 05 April 2025 12:18:56 +0000 (0:00:00.485) 0:00:06.894 ******** 2025-04-05 12:18:56.861655 | orchestrator | =============================================================================== 2025-04-05 12:18:56.862523 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.02s 2025-04-05 12:18:56.862905 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-04-05 12:18:56.862935 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2025-04-05 12:18:56.863180 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-04-05 12:18:59.073936 | orchestrator | 2025-04-05 12:18:59 | INFO  | Task 21369755-0347-411f-95f9-5d1317c03999 (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-05 12:19:02.980829 | orchestrator | 2025-04-05 12:18:59 | INFO  | It takes a moment until task 21369755-0347-411f-95f9-5d1317c03999 (ceph-configure-lvm-volumes) has been started and output is visible here. 
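
The ceph-configure-lvm-volumes task announced above runs once per storage node and, via the "Write configuration file" handler shown further down, stores the generated OSD layout on testbed-manager. The sketch below reproduces the shape of that data for testbed-node-3, using the UUIDs printed later in this log; the destination path is a placeholder, since the log does not show where the file is written.

    # Shape of the generated Ceph LVM configuration for testbed-node-3
    # (values taken from the "Print configuration data" output below).
    # The destination path is a placeholder -- the real location is not
    # visible in this log.
    cat > /path/to/inventory/host_vars/testbed-node-3/ceph-lvm.yml <<'EOF'
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: ad0d437a-29fb-56b5-bf7c-f26bd837f294
      sdc:
        osd_lvm_uuid: 4ecef128-47ae-5e8f-9b67-b09b9dbd9f26
    lvm_volumes:
      - data: osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294
        data_vg: ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294
      - data: osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26
        data_vg: ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26
    EOF
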
2025-04-05 12:19:02.980987 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-05 12:19:03.598295 | orchestrator | 2025-04-05 12:19:03.598948 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-05 12:19:03.599226 | orchestrator | 2025-04-05 12:19:03.599607 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:19:03.601129 | orchestrator | Saturday 05 April 2025 12:19:03 +0000 (0:00:00.522) 0:00:00.522 ******** 2025-04-05 12:19:03.842180 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:03.842859 | orchestrator | 2025-04-05 12:19:03.843744 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:19:03.844843 | orchestrator | Saturday 05 April 2025 12:19:03 +0000 (0:00:00.245) 0:00:00.768 ******** 2025-04-05 12:19:04.074234 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:19:04.077237 | orchestrator | 2025-04-05 12:19:04.077986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:04.078057 | orchestrator | Saturday 05 April 2025 12:19:04 +0000 (0:00:00.233) 0:00:01.001 ******** 2025-04-05 12:19:04.714516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-05 12:19:04.719100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-05 12:19:04.720090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-05 12:19:04.720116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-05 12:19:04.720135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-05 12:19:04.720836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-05 12:19:04.721353 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-05 12:19:04.722737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-05 12:19:04.723535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-05 12:19:04.725147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-05 12:19:04.725765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-05 12:19:04.726668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-05 12:19:04.727095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-05 12:19:04.727995 | orchestrator | 2025-04-05 12:19:04.728188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:04.729365 | orchestrator | Saturday 05 April 2025 12:19:04 +0000 (0:00:00.639) 0:00:01.640 ******** 2025-04-05 12:19:04.913821 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:04.915343 | orchestrator | 2025-04-05 12:19:04.916851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:05.132987 | orchestrator | Saturday 05 April 2025 12:19:04 +0000 
(0:00:00.200) 0:00:01.840 ******** 2025-04-05 12:19:05.133051 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:05.134327 | orchestrator | 2025-04-05 12:19:05.135518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:05.136346 | orchestrator | Saturday 05 April 2025 12:19:05 +0000 (0:00:00.220) 0:00:02.061 ******** 2025-04-05 12:19:05.364169 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:05.365762 | orchestrator | 2025-04-05 12:19:05.366102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:05.366135 | orchestrator | Saturday 05 April 2025 12:19:05 +0000 (0:00:00.230) 0:00:02.292 ******** 2025-04-05 12:19:05.584026 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:05.585089 | orchestrator | 2025-04-05 12:19:05.585340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:05.586150 | orchestrator | Saturday 05 April 2025 12:19:05 +0000 (0:00:00.217) 0:00:02.509 ******** 2025-04-05 12:19:05.858221 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:05.858903 | orchestrator | 2025-04-05 12:19:05.859358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:05.860872 | orchestrator | Saturday 05 April 2025 12:19:05 +0000 (0:00:00.276) 0:00:02.786 ******** 2025-04-05 12:19:06.102298 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:06.102509 | orchestrator | 2025-04-05 12:19:06.103884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:06.104742 | orchestrator | Saturday 05 April 2025 12:19:06 +0000 (0:00:00.244) 0:00:03.030 ******** 2025-04-05 12:19:06.301665 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:06.301916 | orchestrator | 2025-04-05 12:19:06.302998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:06.303453 | orchestrator | Saturday 05 April 2025 12:19:06 +0000 (0:00:00.199) 0:00:03.229 ******** 2025-04-05 12:19:06.504587 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:06.505119 | orchestrator | 2025-04-05 12:19:06.511997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:06.513057 | orchestrator | Saturday 05 April 2025 12:19:06 +0000 (0:00:00.201) 0:00:03.431 ******** 2025-04-05 12:19:07.173306 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04) 2025-04-05 12:19:07.174313 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04) 2025-04-05 12:19:07.176105 | orchestrator | 2025-04-05 12:19:07.179558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:07.180553 | orchestrator | Saturday 05 April 2025 12:19:07 +0000 (0:00:00.668) 0:00:04.099 ******** 2025-04-05 12:19:07.860008 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02) 2025-04-05 12:19:07.861123 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02) 2025-04-05 12:19:07.861161 | orchestrator | 2025-04-05 12:19:07.863056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 
12:19:07.863182 | orchestrator | Saturday 05 April 2025 12:19:07 +0000 (0:00:00.687) 0:00:04.787 ******** 2025-04-05 12:19:08.372493 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26) 2025-04-05 12:19:08.376994 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26) 2025-04-05 12:19:08.891269 | orchestrator | 2025-04-05 12:19:08.891375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:08.891392 | orchestrator | Saturday 05 April 2025 12:19:08 +0000 (0:00:00.510) 0:00:05.297 ******** 2025-04-05 12:19:08.891421 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2) 2025-04-05 12:19:08.894257 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2) 2025-04-05 12:19:08.895969 | orchestrator | 2025-04-05 12:19:08.896920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:08.897988 | orchestrator | Saturday 05 April 2025 12:19:08 +0000 (0:00:00.518) 0:00:05.816 ******** 2025-04-05 12:19:09.381562 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:19:09.381733 | orchestrator | 2025-04-05 12:19:09.381759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:09.383199 | orchestrator | Saturday 05 April 2025 12:19:09 +0000 (0:00:00.491) 0:00:06.308 ******** 2025-04-05 12:19:09.983218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-05 12:19:09.986363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-05 12:19:09.986615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-05 12:19:09.986646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-05 12:19:09.990710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-05 12:19:09.991411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-05 12:19:09.991745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-05 12:19:09.992440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-05 12:19:09.992950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-05 12:19:09.993427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-05 12:19:09.994197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-05 12:19:09.995118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-05 12:19:09.995823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-05 12:19:09.996302 | orchestrator | 2025-04-05 12:19:09.997373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:09.998188 | orchestrator | Saturday 05 April 2025 12:19:09 
+0000 (0:00:00.602) 0:00:06.910 ******** 2025-04-05 12:19:10.213781 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:10.214010 | orchestrator | 2025-04-05 12:19:10.214757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:10.215153 | orchestrator | Saturday 05 April 2025 12:19:10 +0000 (0:00:00.232) 0:00:07.142 ******** 2025-04-05 12:19:10.397658 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:10.399203 | orchestrator | 2025-04-05 12:19:10.399652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:10.400098 | orchestrator | Saturday 05 April 2025 12:19:10 +0000 (0:00:00.184) 0:00:07.326 ******** 2025-04-05 12:19:10.581321 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:10.584390 | orchestrator | 2025-04-05 12:19:10.584655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:10.585101 | orchestrator | Saturday 05 April 2025 12:19:10 +0000 (0:00:00.183) 0:00:07.510 ******** 2025-04-05 12:19:10.771534 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:10.775544 | orchestrator | 2025-04-05 12:19:10.777028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:10.777126 | orchestrator | Saturday 05 April 2025 12:19:10 +0000 (0:00:00.186) 0:00:07.697 ******** 2025-04-05 12:19:11.239703 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:11.239936 | orchestrator | 2025-04-05 12:19:11.239969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:11.241935 | orchestrator | Saturday 05 April 2025 12:19:11 +0000 (0:00:00.471) 0:00:08.168 ******** 2025-04-05 12:19:11.429768 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:11.429969 | orchestrator | 2025-04-05 12:19:11.429994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:11.430013 | orchestrator | Saturday 05 April 2025 12:19:11 +0000 (0:00:00.188) 0:00:08.357 ******** 2025-04-05 12:19:11.592259 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:11.592564 | orchestrator | 2025-04-05 12:19:11.592860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:11.594734 | orchestrator | Saturday 05 April 2025 12:19:11 +0000 (0:00:00.163) 0:00:08.520 ******** 2025-04-05 12:19:11.798656 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:11.799177 | orchestrator | 2025-04-05 12:19:11.803782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:11.803888 | orchestrator | Saturday 05 April 2025 12:19:11 +0000 (0:00:00.207) 0:00:08.728 ******** 2025-04-05 12:19:12.374111 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-05 12:19:12.374845 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-05 12:19:12.374931 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-05 12:19:12.375035 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-05 12:19:12.375442 | orchestrator | 2025-04-05 12:19:12.375700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:12.376046 | orchestrator | Saturday 05 April 2025 12:19:12 +0000 (0:00:00.575) 0:00:09.303 ******** 2025-04-05 12:19:12.565714 | orchestrator | 
skipping: [testbed-node-3] 2025-04-05 12:19:12.567190 | orchestrator | 2025-04-05 12:19:12.567228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:12.567319 | orchestrator | Saturday 05 April 2025 12:19:12 +0000 (0:00:00.189) 0:00:09.493 ******** 2025-04-05 12:19:12.717319 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:12.717615 | orchestrator | 2025-04-05 12:19:12.717645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:12.721104 | orchestrator | Saturday 05 April 2025 12:19:12 +0000 (0:00:00.151) 0:00:09.644 ******** 2025-04-05 12:19:12.867440 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:12.867755 | orchestrator | 2025-04-05 12:19:12.867831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:12.868262 | orchestrator | Saturday 05 April 2025 12:19:12 +0000 (0:00:00.152) 0:00:09.797 ******** 2025-04-05 12:19:13.051287 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:13.052218 | orchestrator | 2025-04-05 12:19:13.052480 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-05 12:19:13.052778 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.182) 0:00:09.979 ******** 2025-04-05 12:19:13.187081 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-05 12:19:13.187694 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-05 12:19:13.187823 | orchestrator | 2025-04-05 12:19:13.188056 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-05 12:19:13.188364 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.135) 0:00:10.114 ******** 2025-04-05 12:19:13.283056 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:13.285341 | orchestrator | 2025-04-05 12:19:13.507912 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-05 12:19:13.508029 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.097) 0:00:10.212 ******** 2025-04-05 12:19:13.508058 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:13.508115 | orchestrator | 2025-04-05 12:19:13.508642 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-05 12:19:13.509049 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.224) 0:00:10.437 ******** 2025-04-05 12:19:13.660121 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:13.661055 | orchestrator | 2025-04-05 12:19:13.661093 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-05 12:19:13.661152 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.150) 0:00:10.588 ******** 2025-04-05 12:19:13.781454 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:19:13.781587 | orchestrator | 2025-04-05 12:19:13.781959 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-05 12:19:13.782232 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.122) 0:00:10.710 ******** 2025-04-05 12:19:13.948996 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad0d437a-29fb-56b5-bf7c-f26bd837f294'}}) 2025-04-05 12:19:13.949173 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}}) 2025-04-05 12:19:13.949202 | orchestrator | 2025-04-05 12:19:13.950115 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-05 12:19:13.950314 | orchestrator | Saturday 05 April 2025 12:19:13 +0000 (0:00:00.166) 0:00:10.877 ******** 2025-04-05 12:19:14.077615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad0d437a-29fb-56b5-bf7c-f26bd837f294'}})  2025-04-05 12:19:14.079298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}})  2025-04-05 12:19:14.080438 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:14.080841 | orchestrator | 2025-04-05 12:19:14.081321 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-05 12:19:14.082877 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.125) 0:00:11.003 ******** 2025-04-05 12:19:14.210601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad0d437a-29fb-56b5-bf7c-f26bd837f294'}})  2025-04-05 12:19:14.211012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}})  2025-04-05 12:19:14.212514 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:14.355415 | orchestrator | 2025-04-05 12:19:14.355482 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-05 12:19:14.355497 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.132) 0:00:11.135 ******** 2025-04-05 12:19:14.355523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad0d437a-29fb-56b5-bf7c-f26bd837f294'}})  2025-04-05 12:19:14.355859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}})  2025-04-05 12:19:14.355893 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:14.356186 | orchestrator | 2025-04-05 12:19:14.356537 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-05 12:19:14.356889 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.149) 0:00:11.284 ******** 2025-04-05 12:19:14.487412 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:19:14.487647 | orchestrator | 2025-04-05 12:19:14.487874 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-05 12:19:14.487905 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.131) 0:00:11.415 ******** 2025-04-05 12:19:14.609315 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:19:14.609873 | orchestrator | 2025-04-05 12:19:14.610990 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-05 12:19:14.611092 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.121) 0:00:11.537 ******** 2025-04-05 12:19:14.730335 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:14.733042 | orchestrator | 2025-04-05 12:19:14.733680 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-05 12:19:14.734226 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.120) 0:00:11.657 ******** 2025-04-05 12:19:14.856576 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:19:14.857304 | orchestrator | 2025-04-05 12:19:14.859667 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-05 12:19:14.859774 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.127) 0:00:11.785 ******** 2025-04-05 12:19:14.977134 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:14.977751 | orchestrator | 2025-04-05 12:19:14.977781 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-05 12:19:14.978701 | orchestrator | Saturday 05 April 2025 12:19:14 +0000 (0:00:00.119) 0:00:11.904 ******** 2025-04-05 12:19:15.238271 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:19:15.239036 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:15.241030 | orchestrator |  "sdb": { 2025-04-05 12:19:15.241060 | orchestrator |  "osd_lvm_uuid": "ad0d437a-29fb-56b5-bf7c-f26bd837f294" 2025-04-05 12:19:15.241244 | orchestrator |  }, 2025-04-05 12:19:15.241769 | orchestrator |  "sdc": { 2025-04-05 12:19:15.241863 | orchestrator |  "osd_lvm_uuid": "4ecef128-47ae-5e8f-9b67-b09b9dbd9f26" 2025-04-05 12:19:15.242188 | orchestrator |  } 2025-04-05 12:19:15.242443 | orchestrator |  } 2025-04-05 12:19:15.242683 | orchestrator | } 2025-04-05 12:19:15.242912 | orchestrator | 2025-04-05 12:19:15.243277 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-05 12:19:15.243433 | orchestrator | Saturday 05 April 2025 12:19:15 +0000 (0:00:00.260) 0:00:12.165 ******** 2025-04-05 12:19:15.352787 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:15.353832 | orchestrator | 2025-04-05 12:19:15.354248 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-05 12:19:15.354927 | orchestrator | Saturday 05 April 2025 12:19:15 +0000 (0:00:00.112) 0:00:12.278 ******** 2025-04-05 12:19:15.481721 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:15.481880 | orchestrator | 2025-04-05 12:19:15.482764 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-05 12:19:15.484220 | orchestrator | Saturday 05 April 2025 12:19:15 +0000 (0:00:00.131) 0:00:12.410 ******** 2025-04-05 12:19:15.611895 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:19:15.612421 | orchestrator | 2025-04-05 12:19:15.613157 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-05 12:19:15.614337 | orchestrator | Saturday 05 April 2025 12:19:15 +0000 (0:00:00.127) 0:00:12.538 ******** 2025-04-05 12:19:15.822074 | orchestrator | changed: [testbed-node-3] => { 2025-04-05 12:19:15.822461 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-05 12:19:15.822663 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:15.823089 | orchestrator |  "sdb": { 2025-04-05 12:19:15.823829 | orchestrator |  "osd_lvm_uuid": "ad0d437a-29fb-56b5-bf7c-f26bd837f294" 2025-04-05 12:19:15.823918 | orchestrator |  }, 2025-04-05 12:19:15.824103 | orchestrator |  "sdc": { 2025-04-05 12:19:15.826602 | orchestrator |  "osd_lvm_uuid": "4ecef128-47ae-5e8f-9b67-b09b9dbd9f26" 2025-04-05 12:19:15.828154 | orchestrator |  } 2025-04-05 12:19:15.829288 | orchestrator |  }, 2025-04-05 12:19:15.830848 | orchestrator |  "lvm_volumes": [ 2025-04-05 12:19:15.831912 | orchestrator |  { 2025-04-05 12:19:15.832594 | orchestrator |  "data": "osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294", 2025-04-05 12:19:15.833336 | orchestrator |  
"data_vg": "ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294" 2025-04-05 12:19:15.833886 | orchestrator |  }, 2025-04-05 12:19:15.834380 | orchestrator |  { 2025-04-05 12:19:15.834863 | orchestrator |  "data": "osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26", 2025-04-05 12:19:15.837931 | orchestrator |  "data_vg": "ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26" 2025-04-05 12:19:15.838201 | orchestrator |  } 2025-04-05 12:19:15.838229 | orchestrator |  ] 2025-04-05 12:19:15.838245 | orchestrator |  } 2025-04-05 12:19:15.838260 | orchestrator | } 2025-04-05 12:19:15.838276 | orchestrator | 2025-04-05 12:19:15.838297 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-05 12:19:15.839175 | orchestrator | Saturday 05 April 2025 12:19:15 +0000 (0:00:00.211) 0:00:12.750 ******** 2025-04-05 12:19:17.612877 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:17.616346 | orchestrator | 2025-04-05 12:19:17.616392 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-05 12:19:17.618392 | orchestrator | 2025-04-05 12:19:17.911589 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:19:17.911648 | orchestrator | Saturday 05 April 2025 12:19:17 +0000 (0:00:01.789) 0:00:14.539 ******** 2025-04-05 12:19:17.911673 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:17.914146 | orchestrator | 2025-04-05 12:19:17.914926 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:19:17.916058 | orchestrator | Saturday 05 April 2025 12:19:17 +0000 (0:00:00.299) 0:00:14.838 ******** 2025-04-05 12:19:18.140957 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:19:18.141130 | orchestrator | 2025-04-05 12:19:18.141417 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:18.141845 | orchestrator | Saturday 05 April 2025 12:19:18 +0000 (0:00:00.227) 0:00:15.066 ******** 2025-04-05 12:19:18.516730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-05 12:19:18.517153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-05 12:19:18.517446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-05 12:19:18.518417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-05 12:19:18.518876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-05 12:19:18.518906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-05 12:19:18.519215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-05 12:19:18.519867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-05 12:19:18.520704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-05 12:19:18.520829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-05 12:19:18.520856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-05 12:19:18.521248 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-05 12:19:18.521653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-05 12:19:18.522062 | orchestrator | 2025-04-05 12:19:18.522326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:18.523361 | orchestrator | Saturday 05 April 2025 12:19:18 +0000 (0:00:00.379) 0:00:15.445 ******** 2025-04-05 12:19:18.704399 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:18.705408 | orchestrator | 2025-04-05 12:19:18.707445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:18.710920 | orchestrator | Saturday 05 April 2025 12:19:18 +0000 (0:00:00.186) 0:00:15.632 ******** 2025-04-05 12:19:18.891234 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:18.892817 | orchestrator | 2025-04-05 12:19:18.894294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:18.896674 | orchestrator | Saturday 05 April 2025 12:19:18 +0000 (0:00:00.186) 0:00:15.819 ******** 2025-04-05 12:19:19.073009 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:19.074379 | orchestrator | 2025-04-05 12:19:19.075120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:19.076526 | orchestrator | Saturday 05 April 2025 12:19:19 +0000 (0:00:00.181) 0:00:16.000 ******** 2025-04-05 12:19:19.263927 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:19.265718 | orchestrator | 2025-04-05 12:19:19.267205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:19.268338 | orchestrator | Saturday 05 April 2025 12:19:19 +0000 (0:00:00.189) 0:00:16.190 ******** 2025-04-05 12:19:19.680455 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:19.680740 | orchestrator | 2025-04-05 12:19:19.681446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:19.681905 | orchestrator | Saturday 05 April 2025 12:19:19 +0000 (0:00:00.418) 0:00:16.609 ******** 2025-04-05 12:19:19.862149 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:19.862835 | orchestrator | 2025-04-05 12:19:19.863430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:19.864279 | orchestrator | Saturday 05 April 2025 12:19:19 +0000 (0:00:00.176) 0:00:16.785 ******** 2025-04-05 12:19:20.014962 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:20.015488 | orchestrator | 2025-04-05 12:19:20.015744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:20.016624 | orchestrator | Saturday 05 April 2025 12:19:20 +0000 (0:00:00.158) 0:00:16.944 ******** 2025-04-05 12:19:20.187279 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:20.188575 | orchestrator | 2025-04-05 12:19:20.191962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:20.195310 | orchestrator | Saturday 05 April 2025 12:19:20 +0000 (0:00:00.172) 0:00:17.116 ******** 2025-04-05 12:19:20.598633 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03) 2025-04-05 12:19:20.601954 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03) 2025-04-05 12:19:20.601988 | orchestrator | 2025-04-05 12:19:20.602011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:20.602125 | orchestrator | Saturday 05 April 2025 12:19:20 +0000 (0:00:00.410) 0:00:17.526 ******** 2025-04-05 12:19:21.000310 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6) 2025-04-05 12:19:21.004567 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6) 2025-04-05 12:19:21.004866 | orchestrator | 2025-04-05 12:19:21.006181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:21.007500 | orchestrator | Saturday 05 April 2025 12:19:20 +0000 (0:00:00.400) 0:00:17.927 ******** 2025-04-05 12:19:21.395145 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff) 2025-04-05 12:19:21.400117 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff) 2025-04-05 12:19:21.400975 | orchestrator | 2025-04-05 12:19:21.401756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:21.402283 | orchestrator | Saturday 05 April 2025 12:19:21 +0000 (0:00:00.396) 0:00:18.323 ******** 2025-04-05 12:19:21.798534 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783) 2025-04-05 12:19:21.799941 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783) 2025-04-05 12:19:21.801634 | orchestrator | 2025-04-05 12:19:21.803376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:21.805869 | orchestrator | Saturday 05 April 2025 12:19:21 +0000 (0:00:00.403) 0:00:18.726 ******** 2025-04-05 12:19:22.114088 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:19:22.114245 | orchestrator | 2025-04-05 12:19:22.114593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:22.115353 | orchestrator | Saturday 05 April 2025 12:19:22 +0000 (0:00:00.316) 0:00:19.043 ******** 2025-04-05 12:19:22.557633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-05 12:19:22.558897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-05 12:19:22.559117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-05 12:19:22.559143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-05 12:19:22.559162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-05 12:19:22.559533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-05 12:19:22.559995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-05 12:19:22.560313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-05 12:19:22.560756 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-05 12:19:22.561317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-05 12:19:22.561705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-05 12:19:22.562614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-05 12:19:22.562685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-05 12:19:22.563049 | orchestrator | 2025-04-05 12:19:22.563424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:22.563844 | orchestrator | Saturday 05 April 2025 12:19:22 +0000 (0:00:00.442) 0:00:19.485 ******** 2025-04-05 12:19:22.735172 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:22.735407 | orchestrator | 2025-04-05 12:19:22.735910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:22.736299 | orchestrator | Saturday 05 April 2025 12:19:22 +0000 (0:00:00.175) 0:00:19.661 ******** 2025-04-05 12:19:22.933228 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:22.934179 | orchestrator | 2025-04-05 12:19:22.934477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:22.936536 | orchestrator | Saturday 05 April 2025 12:19:22 +0000 (0:00:00.200) 0:00:19.861 ******** 2025-04-05 12:19:23.109063 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:23.109336 | orchestrator | 2025-04-05 12:19:23.111183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:23.112192 | orchestrator | Saturday 05 April 2025 12:19:23 +0000 (0:00:00.176) 0:00:20.037 ******** 2025-04-05 12:19:23.296151 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:23.296521 | orchestrator | 2025-04-05 12:19:23.297326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:23.298055 | orchestrator | Saturday 05 April 2025 12:19:23 +0000 (0:00:00.187) 0:00:20.224 ******** 2025-04-05 12:19:23.476399 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:23.477088 | orchestrator | 2025-04-05 12:19:23.477956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:23.478773 | orchestrator | Saturday 05 April 2025 12:19:23 +0000 (0:00:00.180) 0:00:20.404 ******** 2025-04-05 12:19:23.667032 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:23.668019 | orchestrator | 2025-04-05 12:19:23.668910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:23.669536 | orchestrator | Saturday 05 April 2025 12:19:23 +0000 (0:00:00.190) 0:00:20.595 ******** 2025-04-05 12:19:23.848171 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:23.848820 | orchestrator | 2025-04-05 12:19:23.851832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:24.026250 | orchestrator | Saturday 05 April 2025 12:19:23 +0000 (0:00:00.180) 0:00:20.776 ******** 2025-04-05 12:19:24.026313 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:24.027205 | orchestrator | 2025-04-05 12:19:24.028428 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-05 12:19:24.028656 | orchestrator | Saturday 05 April 2025 12:19:24 +0000 (0:00:00.177) 0:00:20.953 ******** 2025-04-05 12:19:24.918983 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-05 12:19:24.919200 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-05 12:19:24.919262 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-05 12:19:24.919374 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-05 12:19:24.920161 | orchestrator | 2025-04-05 12:19:24.920644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:24.921711 | orchestrator | Saturday 05 April 2025 12:19:24 +0000 (0:00:00.893) 0:00:21.847 ******** 2025-04-05 12:19:25.406286 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:25.407606 | orchestrator | 2025-04-05 12:19:25.408620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:25.411578 | orchestrator | Saturday 05 April 2025 12:19:25 +0000 (0:00:00.486) 0:00:22.333 ******** 2025-04-05 12:19:25.601449 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:25.601837 | orchestrator | 2025-04-05 12:19:25.605096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:25.605196 | orchestrator | Saturday 05 April 2025 12:19:25 +0000 (0:00:00.195) 0:00:22.529 ******** 2025-04-05 12:19:25.800590 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:25.800954 | orchestrator | 2025-04-05 12:19:25.801858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:25.802182 | orchestrator | Saturday 05 April 2025 12:19:25 +0000 (0:00:00.199) 0:00:22.728 ******** 2025-04-05 12:19:26.015160 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:26.015288 | orchestrator | 2025-04-05 12:19:26.015378 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-05 12:19:26.016061 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.210) 0:00:22.939 ******** 2025-04-05 12:19:26.194679 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-05 12:19:26.195298 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-05 12:19:26.195585 | orchestrator | 2025-04-05 12:19:26.196280 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-05 12:19:26.196929 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.183) 0:00:23.122 ******** 2025-04-05 12:19:26.324409 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:26.327317 | orchestrator | 2025-04-05 12:19:26.328090 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-05 12:19:26.328122 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.129) 0:00:23.252 ******** 2025-04-05 12:19:26.466353 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:26.467608 | orchestrator | 2025-04-05 12:19:26.468649 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-05 12:19:26.469274 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.140) 0:00:23.393 ******** 2025-04-05 12:19:26.606754 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:26.609547 | orchestrator | 2025-04-05 
12:19:26.610485 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-05 12:19:26.610517 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.141) 0:00:23.534 ******** 2025-04-05 12:19:26.750302 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:19:26.751494 | orchestrator | 2025-04-05 12:19:26.754759 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-05 12:19:26.938735 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.143) 0:00:23.678 ******** 2025-04-05 12:19:26.938895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb474160-46dc-5c48-a12b-143126b3371a'}}) 2025-04-05 12:19:26.939653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bddbd264-0785-5bf3-9ea2-553c515bd099'}}) 2025-04-05 12:19:26.940946 | orchestrator | 2025-04-05 12:19:26.941980 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-05 12:19:26.943143 | orchestrator | Saturday 05 April 2025 12:19:26 +0000 (0:00:00.186) 0:00:23.865 ******** 2025-04-05 12:19:27.093483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb474160-46dc-5c48-a12b-143126b3371a'}})  2025-04-05 12:19:27.093642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bddbd264-0785-5bf3-9ea2-553c515bd099'}})  2025-04-05 12:19:27.097133 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:27.097952 | orchestrator | 2025-04-05 12:19:27.098903 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-05 12:19:27.099667 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 (0:00:00.153) 0:00:24.019 ******** 2025-04-05 12:19:27.249623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb474160-46dc-5c48-a12b-143126b3371a'}})  2025-04-05 12:19:27.253699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bddbd264-0785-5bf3-9ea2-553c515bd099'}})  2025-04-05 12:19:27.253928 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:27.253956 | orchestrator | 2025-04-05 12:19:27.253978 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-05 12:19:27.255021 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 (0:00:00.158) 0:00:24.177 ******** 2025-04-05 12:19:27.518433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb474160-46dc-5c48-a12b-143126b3371a'}})  2025-04-05 12:19:27.519071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bddbd264-0785-5bf3-9ea2-553c515bd099'}})  2025-04-05 12:19:27.519992 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:27.520749 | orchestrator | 2025-04-05 12:19:27.521151 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-05 12:19:27.521819 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 (0:00:00.264) 0:00:24.441 ******** 2025-04-05 12:19:27.658072 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:19:27.659011 | orchestrator | 2025-04-05 12:19:27.660537 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-05 12:19:27.660566 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 
(0:00:00.141) 0:00:24.583 ******** 2025-04-05 12:19:27.796823 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:19:27.797587 | orchestrator | 2025-04-05 12:19:27.798423 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-05 12:19:27.799607 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 (0:00:00.140) 0:00:24.724 ******** 2025-04-05 12:19:27.936144 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:27.937271 | orchestrator | 2025-04-05 12:19:27.938158 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-05 12:19:27.939279 | orchestrator | Saturday 05 April 2025 12:19:27 +0000 (0:00:00.139) 0:00:24.863 ******** 2025-04-05 12:19:28.083910 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:28.084435 | orchestrator | 2025-04-05 12:19:28.084468 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-05 12:19:28.084772 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.147) 0:00:25.011 ******** 2025-04-05 12:19:28.201851 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:28.202732 | orchestrator | 2025-04-05 12:19:28.203635 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-05 12:19:28.208220 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.118) 0:00:25.129 ******** 2025-04-05 12:19:28.343162 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:19:28.343954 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:28.345260 | orchestrator |  "sdb": { 2025-04-05 12:19:28.349891 | orchestrator |  "osd_lvm_uuid": "eb474160-46dc-5c48-a12b-143126b3371a" 2025-04-05 12:19:28.350306 | orchestrator |  }, 2025-04-05 12:19:28.351762 | orchestrator |  "sdc": { 2025-04-05 12:19:28.352717 | orchestrator |  "osd_lvm_uuid": "bddbd264-0785-5bf3-9ea2-553c515bd099" 2025-04-05 12:19:28.353563 | orchestrator |  } 2025-04-05 12:19:28.354201 | orchestrator |  } 2025-04-05 12:19:28.354616 | orchestrator | } 2025-04-05 12:19:28.355433 | orchestrator | 2025-04-05 12:19:28.356052 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-05 12:19:28.356604 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.141) 0:00:25.270 ******** 2025-04-05 12:19:28.479871 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:28.480668 | orchestrator | 2025-04-05 12:19:28.481366 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-05 12:19:28.482407 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.137) 0:00:25.408 ******** 2025-04-05 12:19:28.606738 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:28.608692 | orchestrator | 2025-04-05 12:19:28.610191 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-05 12:19:28.610712 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.126) 0:00:25.534 ******** 2025-04-05 12:19:28.734444 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:19:28.734656 | orchestrator | 2025-04-05 12:19:28.735313 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-05 12:19:28.735735 | orchestrator | Saturday 05 April 2025 12:19:28 +0000 (0:00:00.128) 0:00:25.663 ******** 2025-04-05 12:19:29.167431 | orchestrator | changed: [testbed-node-4] => { 2025-04-05 12:19:29.167542 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-05 12:19:29.168190 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:29.168590 | orchestrator |  "sdb": { 2025-04-05 12:19:29.169406 | orchestrator |  "osd_lvm_uuid": "eb474160-46dc-5c48-a12b-143126b3371a" 2025-04-05 12:19:29.169674 | orchestrator |  }, 2025-04-05 12:19:29.170310 | orchestrator |  "sdc": { 2025-04-05 12:19:29.170828 | orchestrator |  "osd_lvm_uuid": "bddbd264-0785-5bf3-9ea2-553c515bd099" 2025-04-05 12:19:29.171313 | orchestrator |  } 2025-04-05 12:19:29.171713 | orchestrator |  }, 2025-04-05 12:19:29.172371 | orchestrator |  "lvm_volumes": [ 2025-04-05 12:19:29.172704 | orchestrator |  { 2025-04-05 12:19:29.173100 | orchestrator |  "data": "osd-block-eb474160-46dc-5c48-a12b-143126b3371a", 2025-04-05 12:19:29.173496 | orchestrator |  "data_vg": "ceph-eb474160-46dc-5c48-a12b-143126b3371a" 2025-04-05 12:19:29.174133 | orchestrator |  }, 2025-04-05 12:19:29.174600 | orchestrator |  { 2025-04-05 12:19:29.175129 | orchestrator |  "data": "osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099", 2025-04-05 12:19:29.175565 | orchestrator |  "data_vg": "ceph-bddbd264-0785-5bf3-9ea2-553c515bd099" 2025-04-05 12:19:29.175982 | orchestrator |  } 2025-04-05 12:19:29.176401 | orchestrator |  ] 2025-04-05 12:19:29.176868 | orchestrator |  } 2025-04-05 12:19:29.177410 | orchestrator | } 2025-04-05 12:19:29.177914 | orchestrator | 2025-04-05 12:19:29.178236 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-05 12:19:29.178701 | orchestrator | Saturday 05 April 2025 12:19:29 +0000 (0:00:00.431) 0:00:26.094 ******** 2025-04-05 12:19:30.408949 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:30.409679 | orchestrator | 2025-04-05 12:19:30.409762 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-05 12:19:30.409785 | orchestrator | 2025-04-05 12:19:30.410102 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:19:30.410468 | orchestrator | Saturday 05 April 2025 12:19:30 +0000 (0:00:01.239) 0:00:27.335 ******** 2025-04-05 12:19:30.631536 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:30.632000 | orchestrator | 2025-04-05 12:19:30.632034 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:19:30.632416 | orchestrator | Saturday 05 April 2025 12:19:30 +0000 (0:00:00.224) 0:00:27.559 ******** 2025-04-05 12:19:30.849669 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:19:30.849883 | orchestrator | 2025-04-05 12:19:30.850580 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:30.851608 | orchestrator | Saturday 05 April 2025 12:19:30 +0000 (0:00:00.217) 0:00:27.777 ******** 2025-04-05 12:19:31.301497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-05 12:19:31.302889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-05 12:19:31.302925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-05 12:19:31.302948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-05 12:19:31.303981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-04-05 12:19:31.304408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-05 12:19:31.305096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-05 12:19:31.305896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-05 12:19:31.306364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-05 12:19:31.307044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-05 12:19:31.307332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-05 12:19:31.307842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-05 12:19:31.308296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-05 12:19:31.308745 | orchestrator | 2025-04-05 12:19:31.309544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:31.309940 | orchestrator | Saturday 05 April 2025 12:19:31 +0000 (0:00:00.447) 0:00:28.224 ******** 2025-04-05 12:19:31.479570 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:31.483731 | orchestrator | 2025-04-05 12:19:31.671923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:31.671984 | orchestrator | Saturday 05 April 2025 12:19:31 +0000 (0:00:00.182) 0:00:28.407 ******** 2025-04-05 12:19:31.672009 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:31.673286 | orchestrator | 2025-04-05 12:19:31.675760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:31.677264 | orchestrator | Saturday 05 April 2025 12:19:31 +0000 (0:00:00.190) 0:00:28.598 ******** 2025-04-05 12:19:31.851264 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:31.852374 | orchestrator | 2025-04-05 12:19:31.852398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:31.852414 | orchestrator | Saturday 05 April 2025 12:19:31 +0000 (0:00:00.178) 0:00:28.776 ******** 2025-04-05 12:19:32.041880 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:32.043713 | orchestrator | 2025-04-05 12:19:32.044891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:32.236105 | orchestrator | Saturday 05 April 2025 12:19:32 +0000 (0:00:00.192) 0:00:28.969 ******** 2025-04-05 12:19:32.236184 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:32.236604 | orchestrator | 2025-04-05 12:19:32.237609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:32.241336 | orchestrator | Saturday 05 April 2025 12:19:32 +0000 (0:00:00.193) 0:00:29.162 ******** 2025-04-05 12:19:32.445054 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:32.446070 | orchestrator | 2025-04-05 12:19:32.446819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:32.450208 | orchestrator | Saturday 05 April 2025 12:19:32 +0000 (0:00:00.209) 0:00:29.372 ******** 2025-04-05 12:19:32.636706 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:32.637767 
| orchestrator | 2025-04-05 12:19:32.637820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:32.638904 | orchestrator | Saturday 05 April 2025 12:19:32 +0000 (0:00:00.184) 0:00:29.557 ******** 2025-04-05 12:19:32.813282 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:33.364113 | orchestrator | 2025-04-05 12:19:33.364192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:33.364205 | orchestrator | Saturday 05 April 2025 12:19:32 +0000 (0:00:00.180) 0:00:29.737 ******** 2025-04-05 12:19:33.364227 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f) 2025-04-05 12:19:33.365765 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f) 2025-04-05 12:19:33.367135 | orchestrator | 2025-04-05 12:19:33.368201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:33.369394 | orchestrator | Saturday 05 April 2025 12:19:33 +0000 (0:00:00.552) 0:00:30.290 ******** 2025-04-05 12:19:33.918859 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c) 2025-04-05 12:19:33.920343 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c) 2025-04-05 12:19:33.921253 | orchestrator | 2025-04-05 12:19:33.924627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:33.925294 | orchestrator | Saturday 05 April 2025 12:19:33 +0000 (0:00:00.556) 0:00:30.846 ******** 2025-04-05 12:19:34.553451 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f) 2025-04-05 12:19:34.553591 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f) 2025-04-05 12:19:34.553959 | orchestrator | 2025-04-05 12:19:34.554285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:34.554684 | orchestrator | Saturday 05 April 2025 12:19:34 +0000 (0:00:00.634) 0:00:31.481 ******** 2025-04-05 12:19:34.972133 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f) 2025-04-05 12:19:34.972937 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f) 2025-04-05 12:19:34.974510 | orchestrator | 2025-04-05 12:19:34.977620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:19:34.978100 | orchestrator | Saturday 05 April 2025 12:19:34 +0000 (0:00:00.420) 0:00:31.901 ******** 2025-04-05 12:19:35.336354 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:19:35.337315 | orchestrator | 2025-04-05 12:19:35.337493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:35.338915 | orchestrator | Saturday 05 April 2025 12:19:35 +0000 (0:00:00.361) 0:00:32.263 ******** 2025-04-05 12:19:35.756119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-05 12:19:35.756697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-05 12:19:35.756726 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-05 12:19:35.756741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-05 12:19:35.756761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-05 12:19:35.757210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-05 12:19:35.757704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-05 12:19:35.758343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-05 12:19:35.760610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-05 12:19:35.761210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-05 12:19:35.761237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-05 12:19:35.761251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-05 12:19:35.761266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-05 12:19:35.761284 | orchestrator | 2025-04-05 12:19:35.761754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:35.762416 | orchestrator | Saturday 05 April 2025 12:19:35 +0000 (0:00:00.419) 0:00:32.682 ******** 2025-04-05 12:19:35.986973 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:35.987165 | orchestrator | 2025-04-05 12:19:35.987193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:35.987215 | orchestrator | Saturday 05 April 2025 12:19:35 +0000 (0:00:00.230) 0:00:32.912 ******** 2025-04-05 12:19:36.180565 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:36.181869 | orchestrator | 2025-04-05 12:19:36.182117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:36.182601 | orchestrator | Saturday 05 April 2025 12:19:36 +0000 (0:00:00.194) 0:00:33.107 ******** 2025-04-05 12:19:36.373189 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:36.374670 | orchestrator | 2025-04-05 12:19:36.375863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:36.376564 | orchestrator | Saturday 05 April 2025 12:19:36 +0000 (0:00:00.193) 0:00:33.300 ******** 2025-04-05 12:19:36.569625 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:36.570619 | orchestrator | 2025-04-05 12:19:36.573535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:36.574538 | orchestrator | Saturday 05 April 2025 12:19:36 +0000 (0:00:00.196) 0:00:33.497 ******** 2025-04-05 12:19:36.761442 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:36.762180 | orchestrator | 2025-04-05 12:19:36.762870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:36.765056 | orchestrator | Saturday 05 April 2025 12:19:36 +0000 (0:00:00.192) 0:00:33.690 ******** 2025-04-05 12:19:37.228989 | orchestrator | skipping: [testbed-node-5] 2025-04-05 
12:19:37.229361 | orchestrator | 2025-04-05 12:19:37.230497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:37.230563 | orchestrator | Saturday 05 April 2025 12:19:37 +0000 (0:00:00.466) 0:00:34.156 ******** 2025-04-05 12:19:37.410263 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:37.410582 | orchestrator | 2025-04-05 12:19:37.411140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:37.411962 | orchestrator | Saturday 05 April 2025 12:19:37 +0000 (0:00:00.182) 0:00:34.339 ******** 2025-04-05 12:19:37.602979 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:37.603127 | orchestrator | 2025-04-05 12:19:37.603905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:37.604321 | orchestrator | Saturday 05 April 2025 12:19:37 +0000 (0:00:00.192) 0:00:34.531 ******** 2025-04-05 12:19:38.186909 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-05 12:19:38.187129 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-05 12:19:38.187548 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-05 12:19:38.188025 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-05 12:19:38.188325 | orchestrator | 2025-04-05 12:19:38.188596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:38.190955 | orchestrator | Saturday 05 April 2025 12:19:38 +0000 (0:00:00.583) 0:00:35.115 ******** 2025-04-05 12:19:38.371103 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:38.371701 | orchestrator | 2025-04-05 12:19:38.371727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:38.371745 | orchestrator | Saturday 05 April 2025 12:19:38 +0000 (0:00:00.183) 0:00:35.299 ******** 2025-04-05 12:19:38.562210 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:38.563384 | orchestrator | 2025-04-05 12:19:38.565166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:38.745684 | orchestrator | Saturday 05 April 2025 12:19:38 +0000 (0:00:00.191) 0:00:35.490 ******** 2025-04-05 12:19:38.745737 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:38.747942 | orchestrator | 2025-04-05 12:19:38.748308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:19:38.748332 | orchestrator | Saturday 05 April 2025 12:19:38 +0000 (0:00:00.183) 0:00:35.674 ******** 2025-04-05 12:19:38.935412 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:38.936631 | orchestrator | 2025-04-05 12:19:38.936718 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-05 12:19:38.938756 | orchestrator | Saturday 05 April 2025 12:19:38 +0000 (0:00:00.189) 0:00:35.864 ******** 2025-04-05 12:19:39.095248 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-05 12:19:39.099017 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-05 12:19:39.099229 | orchestrator | 2025-04-05 12:19:39.099942 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-05 12:19:39.099967 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.159) 0:00:36.023 ******** 2025-04-05 12:19:39.229310 | 
orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:39.230287 | orchestrator | 2025-04-05 12:19:39.231363 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-05 12:19:39.232835 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.133) 0:00:36.157 ******** 2025-04-05 12:19:39.360072 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:39.360596 | orchestrator | 2025-04-05 12:19:39.361376 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-05 12:19:39.362364 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.131) 0:00:36.288 ******** 2025-04-05 12:19:39.610100 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:39.610241 | orchestrator | 2025-04-05 12:19:39.611438 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-05 12:19:39.612216 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.247) 0:00:36.536 ******** 2025-04-05 12:19:39.738599 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:19:39.739386 | orchestrator | 2025-04-05 12:19:39.740320 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-05 12:19:39.740711 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.131) 0:00:36.667 ******** 2025-04-05 12:19:39.905452 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aac11a6-844c-526d-9ac8-c50cbafa4162'}}) 2025-04-05 12:19:39.907307 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b2d6610-beab-5485-bcb7-dfee77450e0c'}}) 2025-04-05 12:19:39.907572 | orchestrator | 2025-04-05 12:19:39.908950 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-05 12:19:39.909736 | orchestrator | Saturday 05 April 2025 12:19:39 +0000 (0:00:00.166) 0:00:36.833 ******** 2025-04-05 12:19:40.056616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aac11a6-844c-526d-9ac8-c50cbafa4162'}})  2025-04-05 12:19:40.057962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b2d6610-beab-5485-bcb7-dfee77450e0c'}})  2025-04-05 12:19:40.059491 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:40.060362 | orchestrator | 2025-04-05 12:19:40.060392 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-05 12:19:40.060415 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.151) 0:00:36.985 ******** 2025-04-05 12:19:40.222989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aac11a6-844c-526d-9ac8-c50cbafa4162'}})  2025-04-05 12:19:40.223658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b2d6610-beab-5485-bcb7-dfee77450e0c'}})  2025-04-05 12:19:40.225096 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:40.225736 | orchestrator | 2025-04-05 12:19:40.227009 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-05 12:19:40.227770 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.164) 0:00:37.149 ******** 2025-04-05 12:19:40.392454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aac11a6-844c-526d-9ac8-c50cbafa4162'}})  2025-04-05 12:19:40.393219 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b2d6610-beab-5485-bcb7-dfee77450e0c'}})  2025-04-05 12:19:40.394455 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:40.396990 | orchestrator | 2025-04-05 12:19:40.566392 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-05 12:19:40.566473 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.170) 0:00:37.320 ******** 2025-04-05 12:19:40.566501 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:19:40.566937 | orchestrator | 2025-04-05 12:19:40.567924 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-05 12:19:40.568865 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.174) 0:00:37.494 ******** 2025-04-05 12:19:40.710891 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:19:40.711714 | orchestrator | 2025-04-05 12:19:40.712816 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-05 12:19:40.715318 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.144) 0:00:37.638 ******** 2025-04-05 12:19:40.843543 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:40.843996 | orchestrator | 2025-04-05 12:19:40.845173 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-05 12:19:40.845995 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.132) 0:00:37.771 ******** 2025-04-05 12:19:40.974964 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:40.975582 | orchestrator | 2025-04-05 12:19:40.976585 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-05 12:19:40.978666 | orchestrator | Saturday 05 April 2025 12:19:40 +0000 (0:00:00.131) 0:00:37.903 ******** 2025-04-05 12:19:41.102309 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:41.102716 | orchestrator | 2025-04-05 12:19:41.103973 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-05 12:19:41.104781 | orchestrator | Saturday 05 April 2025 12:19:41 +0000 (0:00:00.127) 0:00:38.030 ******** 2025-04-05 12:19:41.245599 | orchestrator | ok: [testbed-node-5] => { 2025-04-05 12:19:41.245752 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:41.246826 | orchestrator |  "sdb": { 2025-04-05 12:19:41.248412 | orchestrator |  "osd_lvm_uuid": "4aac11a6-844c-526d-9ac8-c50cbafa4162" 2025-04-05 12:19:41.249146 | orchestrator |  }, 2025-04-05 12:19:41.249708 | orchestrator |  "sdc": { 2025-04-05 12:19:41.250606 | orchestrator |  "osd_lvm_uuid": "7b2d6610-beab-5485-bcb7-dfee77450e0c" 2025-04-05 12:19:41.251223 | orchestrator |  } 2025-04-05 12:19:41.251615 | orchestrator |  } 2025-04-05 12:19:41.252498 | orchestrator | } 2025-04-05 12:19:41.253296 | orchestrator | 2025-04-05 12:19:41.253562 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-05 12:19:41.254216 | orchestrator | Saturday 05 April 2025 12:19:41 +0000 (0:00:00.141) 0:00:38.171 ******** 2025-04-05 12:19:41.550300 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:41.550588 | orchestrator | 2025-04-05 12:19:41.551275 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-05 12:19:41.552166 | orchestrator | Saturday 05 April 2025 12:19:41 +0000 (0:00:00.306) 0:00:38.478 ******** 2025-04-05 
12:19:41.683584 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:41.684974 | orchestrator | 2025-04-05 12:19:41.685739 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-05 12:19:41.686589 | orchestrator | Saturday 05 April 2025 12:19:41 +0000 (0:00:00.133) 0:00:38.611 ******** 2025-04-05 12:19:41.837442 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:19:41.837922 | orchestrator | 2025-04-05 12:19:41.838583 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-05 12:19:41.839298 | orchestrator | Saturday 05 April 2025 12:19:41 +0000 (0:00:00.152) 0:00:38.764 ******** 2025-04-05 12:19:42.093406 | orchestrator | changed: [testbed-node-5] => { 2025-04-05 12:19:42.093876 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-05 12:19:42.094961 | orchestrator |  "ceph_osd_devices": { 2025-04-05 12:19:42.095879 | orchestrator |  "sdb": { 2025-04-05 12:19:42.096820 | orchestrator |  "osd_lvm_uuid": "4aac11a6-844c-526d-9ac8-c50cbafa4162" 2025-04-05 12:19:42.097590 | orchestrator |  }, 2025-04-05 12:19:42.098303 | orchestrator |  "sdc": { 2025-04-05 12:19:42.099116 | orchestrator |  "osd_lvm_uuid": "7b2d6610-beab-5485-bcb7-dfee77450e0c" 2025-04-05 12:19:42.099855 | orchestrator |  } 2025-04-05 12:19:42.100464 | orchestrator |  }, 2025-04-05 12:19:42.101221 | orchestrator |  "lvm_volumes": [ 2025-04-05 12:19:42.101683 | orchestrator |  { 2025-04-05 12:19:42.102226 | orchestrator |  "data": "osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162", 2025-04-05 12:19:42.102598 | orchestrator |  "data_vg": "ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162" 2025-04-05 12:19:42.103241 | orchestrator |  }, 2025-04-05 12:19:42.103506 | orchestrator |  { 2025-04-05 12:19:42.104016 | orchestrator |  "data": "osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c", 2025-04-05 12:19:42.104505 | orchestrator |  "data_vg": "ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c" 2025-04-05 12:19:42.104892 | orchestrator |  } 2025-04-05 12:19:42.105287 | orchestrator |  ] 2025-04-05 12:19:42.105685 | orchestrator |  } 2025-04-05 12:19:42.106102 | orchestrator | } 2025-04-05 12:19:42.106558 | orchestrator | 2025-04-05 12:19:42.106810 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-05 12:19:42.107181 | orchestrator | Saturday 05 April 2025 12:19:42 +0000 (0:00:00.256) 0:00:39.021 ******** 2025-04-05 12:19:43.163116 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-05 12:19:43.163833 | orchestrator | 2025-04-05 12:19:43.164522 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:19:43.164760 | orchestrator | 2025-04-05 12:19:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:19:43.165032 | orchestrator | 2025-04-05 12:19:43 | INFO  | Please wait and do not abort execution. 
2025-04-05 12:19:43.166117 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-05 12:19:43.167784 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-05 12:19:43.168220 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-05 12:19:43.168842 | orchestrator | 2025-04-05 12:19:43.170213 | orchestrator | 2025-04-05 12:19:43.171150 | orchestrator | 2025-04-05 12:19:43.171958 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:19:43.172359 | orchestrator | Saturday 05 April 2025 12:19:43 +0000 (0:00:01.067) 0:00:40.089 ******** 2025-04-05 12:19:43.173205 | orchestrator | =============================================================================== 2025-04-05 12:19:43.174924 | orchestrator | Write configuration file ------------------------------------------------ 4.10s 2025-04-05 12:19:43.175373 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2025-04-05 12:19:43.176018 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s 2025-04-05 12:19:43.176380 | orchestrator | Print configuration data ------------------------------------------------ 0.90s 2025-04-05 12:19:43.176871 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2025-04-05 12:19:43.177483 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2025-04-05 12:19:43.177806 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-04-05 12:19:43.178805 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2025-04-05 12:19:43.179203 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-04-05 12:19:43.179853 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-04-05 12:19:43.180296 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.58s 2025-04-05 12:19:43.181212 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-04-05 12:19:43.181523 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-04-05 12:19:43.182135 | orchestrator | Print WAL devices ------------------------------------------------------- 0.56s 2025-04-05 12:19:43.182361 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-04-05 12:19:43.182820 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2025-04-05 12:19:43.183574 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.54s 2025-04-05 12:19:43.183770 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.54s 2025-04-05 12:19:43.184569 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.52s 2025-04-05 12:19:43.185042 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-04-05 12:19:55.324730 | orchestrator | 2025-04-05 12:19:55 | INFO  | Task 91ab24da-1337-4592-b914-e1389ff42477 is running in background. Output coming soon. 
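The configuration data printed above for each node boils down to a small per-host variables structure: one osd_lvm_uuid per OSD disk and one derived lvm_volumes entry per UUID. A minimal sketch of that structure for testbed-node-5 follows, using the UUIDs from the log output; the exact file name and path written by the "Write configuration file" handler are not shown in the log and are assumed here.

```yaml
# Sketch of the per-host Ceph LVM configuration printed above for testbed-node-5.
# The target file name/path used by the handler is an assumption.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 4aac11a6-844c-526d-9ac8-c50cbafa4162
  sdc:
    osd_lvm_uuid: 7b2d6610-beab-5485-bcb7-dfee77450e0c
lvm_volumes:
  - data: osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162
    data_vg: ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162
  - data: osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c
    data_vg: ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c
```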
2025-04-05 12:20:19.056884 | orchestrator | 2025-04-05 12:20:10 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-04-05 12:20:20.569676 | orchestrator | 2025-04-05 12:20:10 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-04-05 12:20:20.569739 | orchestrator | 2025-04-05 12:20:10 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-04-05 12:20:20.569755 | orchestrator | 2025-04-05 12:20:11 | INFO  | Handling group overwrites in 99-overwrite 2025-04-05 12:20:20.569825 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group ceph-mds from 50-ceph 2025-04-05 12:20:20.569856 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group ceph-rgw from 50-ceph 2025-04-05 12:20:20.569871 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group netbird:children from 50-infrastruture 2025-04-05 12:20:20.569885 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group storage:children from 50-kolla 2025-04-05 12:20:20.569900 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group frr:children from 60-generic 2025-04-05 12:20:20.569914 | orchestrator | 2025-04-05 12:20:11 | INFO  | Handling group overwrites in 20-roles 2025-04-05 12:20:20.569928 | orchestrator | 2025-04-05 12:20:11 | INFO  | Removing group k3s_node from 50-infrastruture 2025-04-05 12:20:20.569966 | orchestrator | 2025-04-05 12:20:11 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-04-05 12:20:20.569981 | orchestrator | 2025-04-05 12:20:18 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-04-05 12:20:20.570008 | orchestrator | 2025-04-05 12:20:20 | INFO  | Task 8adb24d0-7f34-458b-9fed-e62cb2cff501 (ceph-create-lvm-devices) was prepared for execution. 2025-04-05 12:20:23.734680 | orchestrator | 2025-04-05 12:20:20 | INFO  | It takes a moment until task 8adb24d0-7f34-458b-9fed-e62cb2cff501 (ceph-create-lvm-devices) has been started and output is visible here. 
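The ceph-create-lvm-devices play that starts below consumes that structure: the "Create block VGs" task creates one volume group per data_vg, and "Create block LVs" creates one logical volume per data entry inside it. A minimal Ansible sketch of that mapping follows, assuming the community.general.lvg and community.general.lvol modules and a hypothetical block_vg_pvs lookup for the backing device; it only illustrates the mapping and is not the playbook actually executed under /ansible.

```yaml
# Illustrative sketch only: maps each lvm_volumes entry to a VG and an LV.
# block_vg_pvs is a hypothetical dict, e.g. for testbed-node-3:
#   ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294: /dev/sdb
#   ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26: /dev/sdc
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                 # e.g. ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294
    pvs: "{{ block_vg_pvs[item.data_vg] }}"  # backing physical volume for this VG
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                    # e.g. osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294
    size: 100%VG                             # whole VG used for the OSD block LV
  loop: "{{ lvm_volumes }}"
```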
2025-04-05 12:20:23.734772 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-05 12:20:24.103483 | orchestrator | 2025-04-05 12:20:24.104993 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-05 12:20:24.106582 | orchestrator | 2025-04-05 12:20:24.106921 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:20:24.108268 | orchestrator | Saturday 05 April 2025 12:20:24 +0000 (0:00:00.321) 0:00:00.321 ******** 2025-04-05 12:20:24.327949 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:20:24.328083 | orchestrator | 2025-04-05 12:20:24.330115 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:20:24.330936 | orchestrator | Saturday 05 April 2025 12:20:24 +0000 (0:00:00.223) 0:00:00.545 ******** 2025-04-05 12:20:24.535355 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:24.537765 | orchestrator | 2025-04-05 12:20:24.540334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:24.540372 | orchestrator | Saturday 05 April 2025 12:20:24 +0000 (0:00:00.209) 0:00:00.754 ******** 2025-04-05 12:20:25.133127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-05 12:20:25.134746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-05 12:20:25.135958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-05 12:20:25.139190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-05 12:20:25.139259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-05 12:20:25.141643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-05 12:20:25.141943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-05 12:20:25.144521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-05 12:20:25.146639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-05 12:20:25.148048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-05 12:20:25.149185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-05 12:20:25.150219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-05 12:20:25.151172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-05 12:20:25.152868 | orchestrator | 2025-04-05 12:20:25.154144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:25.154565 | orchestrator | Saturday 05 April 2025 12:20:25 +0000 (0:00:00.597) 0:00:01.352 ******** 2025-04-05 12:20:25.317454 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:25.317633 | orchestrator | 2025-04-05 12:20:25.318127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:25.319952 | orchestrator | Saturday 05 April 2025 12:20:25 +0000 
(0:00:00.184) 0:00:01.536 ******** 2025-04-05 12:20:25.494405 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:25.674145 | orchestrator | 2025-04-05 12:20:25.674220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:25.674236 | orchestrator | Saturday 05 April 2025 12:20:25 +0000 (0:00:00.175) 0:00:01.712 ******** 2025-04-05 12:20:25.674259 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:25.674699 | orchestrator | 2025-04-05 12:20:25.675058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:25.678639 | orchestrator | Saturday 05 April 2025 12:20:25 +0000 (0:00:00.181) 0:00:01.894 ******** 2025-04-05 12:20:25.853284 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:25.853933 | orchestrator | 2025-04-05 12:20:25.854604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:25.857761 | orchestrator | Saturday 05 April 2025 12:20:25 +0000 (0:00:00.179) 0:00:02.073 ******** 2025-04-05 12:20:26.037866 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:26.039200 | orchestrator | 2025-04-05 12:20:26.041770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:26.217999 | orchestrator | Saturday 05 April 2025 12:20:26 +0000 (0:00:00.184) 0:00:02.258 ******** 2025-04-05 12:20:26.218155 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:26.221036 | orchestrator | 2025-04-05 12:20:26.221136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:26.221160 | orchestrator | Saturday 05 April 2025 12:20:26 +0000 (0:00:00.178) 0:00:02.436 ******** 2025-04-05 12:20:26.396676 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:26.399611 | orchestrator | 2025-04-05 12:20:26.579930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:26.579985 | orchestrator | Saturday 05 April 2025 12:20:26 +0000 (0:00:00.179) 0:00:02.616 ******** 2025-04-05 12:20:26.580007 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:26.583264 | orchestrator | 2025-04-05 12:20:26.585047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:26.585300 | orchestrator | Saturday 05 April 2025 12:20:26 +0000 (0:00:00.179) 0:00:02.796 ******** 2025-04-05 12:20:27.080710 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04) 2025-04-05 12:20:27.080889 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04) 2025-04-05 12:20:27.081016 | orchestrator | 2025-04-05 12:20:27.081455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:27.081764 | orchestrator | Saturday 05 April 2025 12:20:27 +0000 (0:00:00.505) 0:00:03.301 ******** 2025-04-05 12:20:27.537570 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02) 2025-04-05 12:20:27.540654 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02) 2025-04-05 12:20:27.540762 | orchestrator | 2025-04-05 12:20:27.540808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 
12:20:27.540830 | orchestrator | Saturday 05 April 2025 12:20:27 +0000 (0:00:00.454) 0:00:03.755 ******** 2025-04-05 12:20:27.877013 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26) 2025-04-05 12:20:27.877234 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26) 2025-04-05 12:20:27.877463 | orchestrator | 2025-04-05 12:20:27.878393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:27.883674 | orchestrator | Saturday 05 April 2025 12:20:27 +0000 (0:00:00.341) 0:00:04.097 ******** 2025-04-05 12:20:28.236259 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2) 2025-04-05 12:20:28.237355 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2) 2025-04-05 12:20:28.238471 | orchestrator | 2025-04-05 12:20:28.239644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:28.240062 | orchestrator | Saturday 05 April 2025 12:20:28 +0000 (0:00:00.357) 0:00:04.454 ******** 2025-04-05 12:20:28.488296 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:20:28.491362 | orchestrator | 2025-04-05 12:20:28.492245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:28.493503 | orchestrator | Saturday 05 April 2025 12:20:28 +0000 (0:00:00.252) 0:00:04.707 ******** 2025-04-05 12:20:28.935995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-05 12:20:28.936917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-05 12:20:28.938569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-05 12:20:28.939343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-05 12:20:28.939374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-05 12:20:28.940480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-05 12:20:28.941033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-05 12:20:28.941854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-05 12:20:28.942160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-05 12:20:28.942894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-05 12:20:28.943374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-05 12:20:28.943852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-05 12:20:28.944810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-05 12:20:28.944913 | orchestrator | 2025-04-05 12:20:28.947888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:28.949630 | orchestrator | Saturday 05 April 2025 12:20:28 
+0000 (0:00:00.449) 0:00:05.156 ******** 2025-04-05 12:20:29.116836 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:29.119276 | orchestrator | 2025-04-05 12:20:29.119309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:29.119331 | orchestrator | Saturday 05 April 2025 12:20:29 +0000 (0:00:00.179) 0:00:05.335 ******** 2025-04-05 12:20:29.295096 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:29.295214 | orchestrator | 2025-04-05 12:20:29.295449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:29.295813 | orchestrator | Saturday 05 April 2025 12:20:29 +0000 (0:00:00.175) 0:00:05.511 ******** 2025-04-05 12:20:29.454256 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:29.622346 | orchestrator | 2025-04-05 12:20:29.622389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:29.622407 | orchestrator | Saturday 05 April 2025 12:20:29 +0000 (0:00:00.159) 0:00:05.671 ******** 2025-04-05 12:20:29.622431 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:29.623239 | orchestrator | 2025-04-05 12:20:29.625151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:29.625644 | orchestrator | Saturday 05 April 2025 12:20:29 +0000 (0:00:00.170) 0:00:05.842 ******** 2025-04-05 12:20:30.018165 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:30.018302 | orchestrator | 2025-04-05 12:20:30.020692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:30.195353 | orchestrator | Saturday 05 April 2025 12:20:30 +0000 (0:00:00.395) 0:00:06.237 ******** 2025-04-05 12:20:30.195420 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:30.195905 | orchestrator | 2025-04-05 12:20:30.196960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:30.197605 | orchestrator | Saturday 05 April 2025 12:20:30 +0000 (0:00:00.177) 0:00:06.415 ******** 2025-04-05 12:20:30.380199 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:30.381063 | orchestrator | 2025-04-05 12:20:30.381666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:30.381962 | orchestrator | Saturday 05 April 2025 12:20:30 +0000 (0:00:00.183) 0:00:06.598 ******** 2025-04-05 12:20:30.559408 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:30.559866 | orchestrator | 2025-04-05 12:20:30.561084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:30.561564 | orchestrator | Saturday 05 April 2025 12:20:30 +0000 (0:00:00.180) 0:00:06.779 ******** 2025-04-05 12:20:31.143002 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-05 12:20:31.143144 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-05 12:20:31.143178 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-05 12:20:31.143213 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-05 12:20:31.143448 | orchestrator | 2025-04-05 12:20:31.144093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:31.146162 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.582) 0:00:07.361 ******** 2025-04-05 12:20:31.332142 | orchestrator | 
skipping: [testbed-node-3] 2025-04-05 12:20:31.333990 | orchestrator | 2025-04-05 12:20:31.334211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:31.334711 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.189) 0:00:07.551 ******** 2025-04-05 12:20:31.512038 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:31.512377 | orchestrator | 2025-04-05 12:20:31.513692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:31.514230 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.180) 0:00:07.731 ******** 2025-04-05 12:20:31.689927 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:31.690532 | orchestrator | 2025-04-05 12:20:31.691140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:31.694700 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.177) 0:00:07.909 ******** 2025-04-05 12:20:31.864453 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:31.866683 | orchestrator | 2025-04-05 12:20:31.867368 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-05 12:20:31.867399 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.174) 0:00:08.084 ******** 2025-04-05 12:20:31.990008 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:31.992675 | orchestrator | 2025-04-05 12:20:31.992766 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-05 12:20:31.992805 | orchestrator | Saturday 05 April 2025 12:20:31 +0000 (0:00:00.125) 0:00:08.209 ******** 2025-04-05 12:20:32.187995 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad0d437a-29fb-56b5-bf7c-f26bd837f294'}}) 2025-04-05 12:20:32.189499 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}}) 2025-04-05 12:20:32.189929 | orchestrator | 2025-04-05 12:20:32.190297 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-05 12:20:32.190821 | orchestrator | Saturday 05 April 2025 12:20:32 +0000 (0:00:00.198) 0:00:08.408 ******** 2025-04-05 12:20:34.271305 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'}) 2025-04-05 12:20:34.271921 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}) 2025-04-05 12:20:34.273015 | orchestrator | 2025-04-05 12:20:34.273737 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-05 12:20:34.274853 | orchestrator | Saturday 05 April 2025 12:20:34 +0000 (0:00:02.081) 0:00:10.489 ******** 2025-04-05 12:20:34.437109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:34.437933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:34.438855 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:34.439662 | orchestrator | 2025-04-05 12:20:34.440712 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-05 12:20:34.441500 | orchestrator | Saturday 05 April 2025 12:20:34 +0000 (0:00:00.167) 0:00:10.656 ******** 2025-04-05 12:20:35.917084 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'}) 2025-04-05 12:20:35.917711 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}) 2025-04-05 12:20:35.918197 | orchestrator | 2025-04-05 12:20:35.920635 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-05 12:20:35.920944 | orchestrator | Saturday 05 April 2025 12:20:35 +0000 (0:00:01.478) 0:00:12.135 ******** 2025-04-05 12:20:36.083135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:36.083417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:36.083825 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:36.085300 | orchestrator | 2025-04-05 12:20:36.228322 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-05 12:20:36.228390 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.166) 0:00:12.302 ******** 2025-04-05 12:20:36.228415 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:36.229239 | orchestrator | 2025-04-05 12:20:36.229892 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-05 12:20:36.230897 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.145) 0:00:12.447 ******** 2025-04-05 12:20:36.388456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:36.390741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:36.391565 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:36.391596 | orchestrator | 2025-04-05 12:20:36.392095 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-05 12:20:36.392925 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.159) 0:00:12.606 ******** 2025-04-05 12:20:36.526347 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:36.526527 | orchestrator | 2025-04-05 12:20:36.526557 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-05 12:20:36.526635 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.139) 0:00:12.745 ******** 2025-04-05 12:20:36.700729 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:36.701550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:36.703248 | orchestrator | skipping: 
[testbed-node-3] 2025-04-05 12:20:36.704904 | orchestrator | 2025-04-05 12:20:36.843144 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-05 12:20:36.843247 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.174) 0:00:12.920 ******** 2025-04-05 12:20:36.843278 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:36.844205 | orchestrator | 2025-04-05 12:20:36.845014 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-05 12:20:36.846572 | orchestrator | Saturday 05 April 2025 12:20:36 +0000 (0:00:00.142) 0:00:13.062 ******** 2025-04-05 12:20:37.155750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:37.156759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:37.157479 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:37.160259 | orchestrator | 2025-04-05 12:20:37.338145 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-05 12:20:37.338214 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.312) 0:00:13.374 ******** 2025-04-05 12:20:37.338240 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:37.338941 | orchestrator | 2025-04-05 12:20:37.339890 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-05 12:20:37.342612 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.182) 0:00:13.556 ******** 2025-04-05 12:20:37.502514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:37.505287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:37.505705 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:37.506324 | orchestrator | 2025-04-05 12:20:37.506824 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-05 12:20:37.507519 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.163) 0:00:13.720 ******** 2025-04-05 12:20:37.670894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:37.672026 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:37.672740 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:37.673665 | orchestrator | 2025-04-05 12:20:37.674391 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-05 12:20:37.675067 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.169) 0:00:13.890 ******** 2025-04-05 12:20:37.842702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:37.843159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:37.843525 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:37.844455 | orchestrator | 2025-04-05 12:20:37.845176 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-05 12:20:37.845511 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.172) 0:00:14.062 ******** 2025-04-05 12:20:37.982165 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:37.982291 | orchestrator | 2025-04-05 12:20:37.982985 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-05 12:20:37.983489 | orchestrator | Saturday 05 April 2025 12:20:37 +0000 (0:00:00.138) 0:00:14.201 ******** 2025-04-05 12:20:38.122403 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:38.123018 | orchestrator | 2025-04-05 12:20:38.123521 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-05 12:20:38.124375 | orchestrator | Saturday 05 April 2025 12:20:38 +0000 (0:00:00.140) 0:00:14.341 ******** 2025-04-05 12:20:38.264420 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:38.264983 | orchestrator | 2025-04-05 12:20:38.266627 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-05 12:20:38.423936 | orchestrator | Saturday 05 April 2025 12:20:38 +0000 (0:00:00.142) 0:00:14.483 ******** 2025-04-05 12:20:38.423997 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:20:38.424292 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-05 12:20:38.424374 | orchestrator | } 2025-04-05 12:20:38.425035 | orchestrator | 2025-04-05 12:20:38.425118 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-05 12:20:38.425505 | orchestrator | Saturday 05 April 2025 12:20:38 +0000 (0:00:00.159) 0:00:14.643 ******** 2025-04-05 12:20:38.563262 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:20:38.564169 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-05 12:20:38.564207 | orchestrator | } 2025-04-05 12:20:38.564230 | orchestrator | 2025-04-05 12:20:38.564385 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-05 12:20:38.564603 | orchestrator | Saturday 05 April 2025 12:20:38 +0000 (0:00:00.139) 0:00:14.782 ******** 2025-04-05 12:20:38.708281 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:20:38.710381 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-05 12:20:38.710643 | orchestrator | } 2025-04-05 12:20:38.710673 | orchestrator | 2025-04-05 12:20:38.710687 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-05 12:20:38.710708 | orchestrator | Saturday 05 April 2025 12:20:38 +0000 (0:00:00.142) 0:00:14.925 ******** 2025-04-05 12:20:39.446265 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:39.446505 | orchestrator | 2025-04-05 12:20:39.448249 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-05 12:20:39.449015 | orchestrator | Saturday 05 April 2025 12:20:39 +0000 (0:00:00.738) 0:00:15.664 ******** 2025-04-05 12:20:39.930583 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:39.931635 | orchestrator | 2025-04-05 12:20:39.932158 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-04-05 12:20:39.933544 | orchestrator | Saturday 05 April 2025 12:20:39 +0000 (0:00:00.483) 0:00:16.147 ******** 2025-04-05 12:20:40.442844 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:40.443645 | orchestrator | 2025-04-05 12:20:40.444902 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-05 12:20:40.445512 | orchestrator | Saturday 05 April 2025 12:20:40 +0000 (0:00:00.513) 0:00:16.661 ******** 2025-04-05 12:20:40.593129 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:40.593722 | orchestrator | 2025-04-05 12:20:40.594643 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-05 12:20:40.595355 | orchestrator | Saturday 05 April 2025 12:20:40 +0000 (0:00:00.151) 0:00:16.812 ******** 2025-04-05 12:20:40.702411 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:40.702703 | orchestrator | 2025-04-05 12:20:40.703200 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-05 12:20:40.703842 | orchestrator | Saturday 05 April 2025 12:20:40 +0000 (0:00:00.109) 0:00:16.921 ******** 2025-04-05 12:20:40.814499 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:40.814585 | orchestrator | 2025-04-05 12:20:40.815280 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-05 12:20:40.815966 | orchestrator | Saturday 05 April 2025 12:20:40 +0000 (0:00:00.111) 0:00:17.033 ******** 2025-04-05 12:20:40.961694 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:20:40.962132 | orchestrator |  "vgs_report": { 2025-04-05 12:20:40.963721 | orchestrator |  "vg": [] 2025-04-05 12:20:40.964006 | orchestrator |  } 2025-04-05 12:20:40.965052 | orchestrator | } 2025-04-05 12:20:40.967056 | orchestrator | 2025-04-05 12:20:41.101898 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-05 12:20:41.101993 | orchestrator | Saturday 05 April 2025 12:20:40 +0000 (0:00:00.147) 0:00:17.181 ******** 2025-04-05 12:20:41.102073 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.102168 | orchestrator | 2025-04-05 12:20:41.102596 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-05 12:20:41.103146 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.139) 0:00:17.321 ******** 2025-04-05 12:20:41.242417 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.243447 | orchestrator | 2025-04-05 12:20:41.244036 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-05 12:20:41.244887 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.140) 0:00:17.461 ******** 2025-04-05 12:20:41.384976 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.385357 | orchestrator | 2025-04-05 12:20:41.386271 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-05 12:20:41.387066 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.142) 0:00:17.604 ******** 2025-04-05 12:20:41.527527 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.528982 | orchestrator | 2025-04-05 12:20:41.531044 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-05 12:20:41.531731 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.141) 0:00:17.745 ******** 2025-04-05 
12:20:41.670330 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.670949 | orchestrator | 2025-04-05 12:20:41.672092 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-05 12:20:41.673043 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.143) 0:00:17.889 ******** 2025-04-05 12:20:41.965301 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:41.967371 | orchestrator | 2025-04-05 12:20:41.969440 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-05 12:20:41.970080 | orchestrator | Saturday 05 April 2025 12:20:41 +0000 (0:00:00.294) 0:00:18.183 ******** 2025-04-05 12:20:42.113985 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.114230 | orchestrator | 2025-04-05 12:20:42.114759 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-05 12:20:42.115460 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.149) 0:00:18.333 ******** 2025-04-05 12:20:42.253116 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.253835 | orchestrator | 2025-04-05 12:20:42.254657 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-05 12:20:42.255644 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.138) 0:00:18.472 ******** 2025-04-05 12:20:42.398484 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.399624 | orchestrator | 2025-04-05 12:20:42.400113 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-05 12:20:42.401271 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.145) 0:00:18.618 ******** 2025-04-05 12:20:42.543940 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.544705 | orchestrator | 2025-04-05 12:20:42.545322 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-05 12:20:42.546261 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.143) 0:00:18.761 ******** 2025-04-05 12:20:42.686766 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.688010 | orchestrator | 2025-04-05 12:20:42.689020 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-05 12:20:42.690839 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.144) 0:00:18.906 ******** 2025-04-05 12:20:42.825997 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.826205 | orchestrator | 2025-04-05 12:20:42.826747 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-05 12:20:42.827470 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.139) 0:00:19.046 ******** 2025-04-05 12:20:42.959255 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:42.959949 | orchestrator | 2025-04-05 12:20:42.961005 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-05 12:20:42.961959 | orchestrator | Saturday 05 April 2025 12:20:42 +0000 (0:00:00.132) 0:00:19.178 ******** 2025-04-05 12:20:43.099589 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:43.099781 | orchestrator | 2025-04-05 12:20:43.100663 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-05 12:20:43.101442 | orchestrator | Saturday 05 April 2025 12:20:43 +0000 (0:00:00.140) 0:00:19.318 
******** 2025-04-05 12:20:43.271609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:43.272564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:43.276685 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:43.277117 | orchestrator | 2025-04-05 12:20:43.278352 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-05 12:20:43.279174 | orchestrator | Saturday 05 April 2025 12:20:43 +0000 (0:00:00.171) 0:00:19.490 ******** 2025-04-05 12:20:43.432566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:43.433886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:43.434836 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:43.435665 | orchestrator | 2025-04-05 12:20:43.436979 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-05 12:20:43.438235 | orchestrator | Saturday 05 April 2025 12:20:43 +0000 (0:00:00.161) 0:00:19.651 ******** 2025-04-05 12:20:43.770369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:43.771059 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:43.771921 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:43.774558 | orchestrator | 2025-04-05 12:20:43.775004 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-05 12:20:43.775456 | orchestrator | Saturday 05 April 2025 12:20:43 +0000 (0:00:00.337) 0:00:19.988 ******** 2025-04-05 12:20:43.938889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:43.939313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:43.939351 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:43.941056 | orchestrator | 2025-04-05 12:20:43.942272 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-05 12:20:43.943251 | orchestrator | Saturday 05 April 2025 12:20:43 +0000 (0:00:00.167) 0:00:20.156 ******** 2025-04-05 12:20:44.094271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:44.095185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:44.095231 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:44.097274 | orchestrator | 2025-04-05 12:20:44.097749 | 
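The block LVs created earlier ("Create block LVs") and the DB/WAL LV tasks skipped on this node all follow the same pattern: one logical volume per entry, placed in the matching VG, with the loop items carrying a 'data' (LV name) and 'data_vg' (VG name) pair as printed above. A minimal sketch of such a step, assuming the loop runs over lvm_volumes entries as the task names suggest and using the community.general.lvol module (the DB/WAL variants would differ only in LV name and size):

    # Sketch only: the kind of task that produces the "Create block LVs" output above.
    # Assumption: block LVs take the whole VG; DB/WAL LVs would instead get a fixed size.
    - name: Create block LVs (sketch)
      community.general.lvol:
        vg: "{{ item.data_vg }}"   # e.g. ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294
        lv: "{{ item.data }}"      # e.g. osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294
        size: 100%FREE             # assumption: use all free extents in the VG
        shrink: false
        state: present
      loop: "{{ lvm_volumes }}"

The DB/WAL creation tasks are skipped here because their loops evaluate to nothing for this node's configuration, which is consistent with the empty _num_osds_wanted_per_db_vg / _num_osds_wanted_per_wal_vg dictionaries printed earlier.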
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-05 12:20:44.098285 | orchestrator | Saturday 05 April 2025 12:20:44 +0000 (0:00:00.155) 0:00:20.312 ******** 2025-04-05 12:20:44.258444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:44.261078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:44.262132 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:44.262164 | orchestrator | 2025-04-05 12:20:44.262185 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-05 12:20:44.428287 | orchestrator | Saturday 05 April 2025 12:20:44 +0000 (0:00:00.164) 0:00:20.477 ******** 2025-04-05 12:20:44.428393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:44.428478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:44.428502 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:44.428526 | orchestrator | 2025-04-05 12:20:44.429215 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-05 12:20:44.429572 | orchestrator | Saturday 05 April 2025 12:20:44 +0000 (0:00:00.165) 0:00:20.642 ******** 2025-04-05 12:20:44.591870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:44.592170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:44.592965 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:44.593395 | orchestrator | 2025-04-05 12:20:44.594175 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-05 12:20:44.594592 | orchestrator | Saturday 05 April 2025 12:20:44 +0000 (0:00:00.168) 0:00:20.810 ******** 2025-04-05 12:20:45.142128 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:45.142714 | orchestrator | 2025-04-05 12:20:45.142756 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-05 12:20:45.143670 | orchestrator | Saturday 05 April 2025 12:20:45 +0000 (0:00:00.548) 0:00:21.358 ******** 2025-04-05 12:20:45.715987 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:45.716612 | orchestrator | 2025-04-05 12:20:45.717569 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-05 12:20:45.718551 | orchestrator | Saturday 05 April 2025 12:20:45 +0000 (0:00:00.575) 0:00:21.934 ******** 2025-04-05 12:20:45.865582 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:20:45.866068 | orchestrator | 2025-04-05 12:20:45.867113 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-05 12:20:45.868015 | orchestrator | Saturday 05 April 2025 12:20:45 +0000 (0:00:00.150) 0:00:22.085 ******** 2025-04-05 12:20:46.049180 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'vg_name': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}) 2025-04-05 12:20:46.049396 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'vg_name': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'}) 2025-04-05 12:20:46.050160 | orchestrator | 2025-04-05 12:20:46.050871 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-05 12:20:46.052159 | orchestrator | Saturday 05 April 2025 12:20:46 +0000 (0:00:00.183) 0:00:22.268 ******** 2025-04-05 12:20:46.230557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:46.231573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:46.231757 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:46.233605 | orchestrator | 2025-04-05 12:20:46.235045 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-05 12:20:46.235513 | orchestrator | Saturday 05 April 2025 12:20:46 +0000 (0:00:00.180) 0:00:22.449 ******** 2025-04-05 12:20:46.544415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:46.545849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:46.546543 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:46.547383 | orchestrator | 2025-04-05 12:20:46.548434 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-05 12:20:46.549175 | orchestrator | Saturday 05 April 2025 12:20:46 +0000 (0:00:00.315) 0:00:22.764 ******** 2025-04-05 12:20:46.717630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'})  2025-04-05 12:20:46.718348 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'})  2025-04-05 12:20:46.719106 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:20:46.721495 | orchestrator | 2025-04-05 12:20:47.394862 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-05 12:20:47.394961 | orchestrator | Saturday 05 April 2025 12:20:46 +0000 (0:00:00.172) 0:00:22.936 ******** 2025-04-05 12:20:47.394994 | orchestrator | ok: [testbed-node-3] => { 2025-04-05 12:20:47.395511 | orchestrator |  "lvm_report": { 2025-04-05 12:20:47.396170 | orchestrator |  "lv": [ 2025-04-05 12:20:47.396987 | orchestrator |  { 2025-04-05 12:20:47.398011 | orchestrator |  "lv_name": "osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26", 2025-04-05 12:20:47.398612 | orchestrator |  "vg_name": "ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26" 2025-04-05 12:20:47.399562 | orchestrator |  }, 2025-04-05 12:20:47.400471 | orchestrator |  { 2025-04-05 12:20:47.401115 | orchestrator |  "lv_name": "osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294", 2025-04-05 
12:20:47.401858 | orchestrator |  "vg_name": "ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294" 2025-04-05 12:20:47.402589 | orchestrator |  } 2025-04-05 12:20:47.403382 | orchestrator |  ], 2025-04-05 12:20:47.404056 | orchestrator |  "pv": [ 2025-04-05 12:20:47.404491 | orchestrator |  { 2025-04-05 12:20:47.404902 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-05 12:20:47.405625 | orchestrator |  "vg_name": "ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294" 2025-04-05 12:20:47.405890 | orchestrator |  }, 2025-04-05 12:20:47.406513 | orchestrator |  { 2025-04-05 12:20:47.406931 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-05 12:20:47.407249 | orchestrator |  "vg_name": "ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26" 2025-04-05 12:20:47.407870 | orchestrator |  } 2025-04-05 12:20:47.408114 | orchestrator |  ] 2025-04-05 12:20:47.408579 | orchestrator |  } 2025-04-05 12:20:47.409027 | orchestrator | } 2025-04-05 12:20:47.409514 | orchestrator | 2025-04-05 12:20:47.409726 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-05 12:20:47.410222 | orchestrator | 2025-04-05 12:20:47.410500 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:20:47.410873 | orchestrator | Saturday 05 April 2025 12:20:47 +0000 (0:00:00.677) 0:00:23.614 ******** 2025-04-05 12:20:47.644132 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-05 12:20:47.645253 | orchestrator | 2025-04-05 12:20:47.645742 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:20:47.646549 | orchestrator | Saturday 05 April 2025 12:20:47 +0000 (0:00:00.247) 0:00:23.861 ******** 2025-04-05 12:20:48.188937 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:20:48.189417 | orchestrator | 2025-04-05 12:20:48.189885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:48.190653 | orchestrator | Saturday 05 April 2025 12:20:48 +0000 (0:00:00.546) 0:00:24.408 ******** 2025-04-05 12:20:48.652259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-05 12:20:48.653631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-05 12:20:48.656108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-05 12:20:48.657456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-05 12:20:48.658516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-05 12:20:48.660065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-05 12:20:48.660395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-05 12:20:48.661382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-05 12:20:48.661905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-05 12:20:48.662141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-05 12:20:48.662896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-05 12:20:48.663337 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-05 12:20:48.663836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-05 12:20:48.664779 | orchestrator | 2025-04-05 12:20:48.665001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:48.666128 | orchestrator | Saturday 05 April 2025 12:20:48 +0000 (0:00:00.460) 0:00:24.869 ******** 2025-04-05 12:20:48.846746 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:48.846942 | orchestrator | 2025-04-05 12:20:48.848065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:48.848541 | orchestrator | Saturday 05 April 2025 12:20:48 +0000 (0:00:00.196) 0:00:25.065 ******** 2025-04-05 12:20:49.046880 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:49.047068 | orchestrator | 2025-04-05 12:20:49.047534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:49.048401 | orchestrator | Saturday 05 April 2025 12:20:49 +0000 (0:00:00.200) 0:00:25.266 ******** 2025-04-05 12:20:49.237994 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:49.238313 | orchestrator | 2025-04-05 12:20:49.238940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:49.239553 | orchestrator | Saturday 05 April 2025 12:20:49 +0000 (0:00:00.191) 0:00:25.457 ******** 2025-04-05 12:20:49.431683 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:49.432059 | orchestrator | 2025-04-05 12:20:49.432108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:49.432817 | orchestrator | Saturday 05 April 2025 12:20:49 +0000 (0:00:00.193) 0:00:25.650 ******** 2025-04-05 12:20:49.632672 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:49.633279 | orchestrator | 2025-04-05 12:20:49.634125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:49.634229 | orchestrator | Saturday 05 April 2025 12:20:49 +0000 (0:00:00.200) 0:00:25.850 ******** 2025-04-05 12:20:49.828977 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:49.829490 | orchestrator | 2025-04-05 12:20:49.830101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:49.830946 | orchestrator | Saturday 05 April 2025 12:20:49 +0000 (0:00:00.197) 0:00:26.048 ******** 2025-04-05 12:20:50.025098 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:50.025862 | orchestrator | 2025-04-05 12:20:50.026194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:50.027025 | orchestrator | Saturday 05 April 2025 12:20:50 +0000 (0:00:00.196) 0:00:26.244 ******** 2025-04-05 12:20:50.220284 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:50.220599 | orchestrator | 2025-04-05 12:20:50.221090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:50.221536 | orchestrator | Saturday 05 April 2025 12:20:50 +0000 (0:00:00.195) 0:00:26.440 ******** 2025-04-05 12:20:50.819294 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03) 2025-04-05 12:20:50.819475 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03) 2025-04-05 12:20:50.819897 | orchestrator | 2025-04-05 12:20:50.822994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:50.824383 | orchestrator | Saturday 05 April 2025 12:20:50 +0000 (0:00:00.597) 0:00:27.038 ******** 2025-04-05 12:20:51.230251 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6) 2025-04-05 12:20:51.231045 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6) 2025-04-05 12:20:51.231879 | orchestrator | 2025-04-05 12:20:51.232512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:51.233307 | orchestrator | Saturday 05 April 2025 12:20:51 +0000 (0:00:00.409) 0:00:27.448 ******** 2025-04-05 12:20:51.643521 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff) 2025-04-05 12:20:51.643990 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff) 2025-04-05 12:20:51.644751 | orchestrator | 2025-04-05 12:20:51.645424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:51.647642 | orchestrator | Saturday 05 April 2025 12:20:51 +0000 (0:00:00.414) 0:00:27.862 ******** 2025-04-05 12:20:52.076323 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783) 2025-04-05 12:20:52.076913 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783) 2025-04-05 12:20:52.078184 | orchestrator | 2025-04-05 12:20:52.079619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:20:52.413969 | orchestrator | Saturday 05 April 2025 12:20:52 +0000 (0:00:00.431) 0:00:28.294 ******** 2025-04-05 12:20:52.414068 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:20:52.415292 | orchestrator | 2025-04-05 12:20:52.417040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:52.417426 | orchestrator | Saturday 05 April 2025 12:20:52 +0000 (0:00:00.338) 0:00:28.632 ******** 2025-04-05 12:20:52.899556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-05 12:20:52.900579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-05 12:20:52.901481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-05 12:20:52.902681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-05 12:20:52.904001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-05 12:20:52.905211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-05 12:20:52.905364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-05 12:20:52.906402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-05 12:20:52.906460 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-05 12:20:52.906902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-05 12:20:52.907413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-05 12:20:52.907697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-05 12:20:52.908141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-05 12:20:52.908940 | orchestrator | 2025-04-05 12:20:52.909116 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:52.909531 | orchestrator | Saturday 05 April 2025 12:20:52 +0000 (0:00:00.486) 0:00:29.118 ******** 2025-04-05 12:20:53.083021 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:53.083260 | orchestrator | 2025-04-05 12:20:53.084119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:53.084890 | orchestrator | Saturday 05 April 2025 12:20:53 +0000 (0:00:00.183) 0:00:29.302 ******** 2025-04-05 12:20:53.279117 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:53.279635 | orchestrator | 2025-04-05 12:20:53.280433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:53.280915 | orchestrator | Saturday 05 April 2025 12:20:53 +0000 (0:00:00.195) 0:00:29.498 ******** 2025-04-05 12:20:53.468234 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:53.469015 | orchestrator | 2025-04-05 12:20:53.472502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:53.975045 | orchestrator | Saturday 05 April 2025 12:20:53 +0000 (0:00:00.189) 0:00:29.687 ******** 2025-04-05 12:20:53.975137 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:53.977233 | orchestrator | 2025-04-05 12:20:53.977867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:53.978348 | orchestrator | Saturday 05 April 2025 12:20:53 +0000 (0:00:00.503) 0:00:30.191 ******** 2025-04-05 12:20:54.176051 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:54.176607 | orchestrator | 2025-04-05 12:20:54.177326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:54.179337 | orchestrator | Saturday 05 April 2025 12:20:54 +0000 (0:00:00.203) 0:00:30.394 ******** 2025-04-05 12:20:54.370973 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:54.371065 | orchestrator | 2025-04-05 12:20:54.371705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:54.374765 | orchestrator | Saturday 05 April 2025 12:20:54 +0000 (0:00:00.195) 0:00:30.590 ******** 2025-04-05 12:20:54.571386 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:54.571779 | orchestrator | 2025-04-05 12:20:54.572465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:54.572969 | orchestrator | Saturday 05 April 2025 12:20:54 +0000 (0:00:00.200) 0:00:30.790 ******** 2025-04-05 12:20:54.767117 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:54.767742 | orchestrator | 2025-04-05 12:20:54.769981 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-05 12:20:55.398263 | orchestrator | Saturday 05 April 2025 12:20:54 +0000 (0:00:00.194) 0:00:30.985 ******** 2025-04-05 12:20:55.398371 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-05 12:20:55.398948 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-05 12:20:55.398980 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-05 12:20:55.399717 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-05 12:20:55.399776 | orchestrator | 2025-04-05 12:20:55.400647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:55.400967 | orchestrator | Saturday 05 April 2025 12:20:55 +0000 (0:00:00.629) 0:00:31.614 ******** 2025-04-05 12:20:55.609494 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:55.610982 | orchestrator | 2025-04-05 12:20:55.611019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:55.819983 | orchestrator | Saturday 05 April 2025 12:20:55 +0000 (0:00:00.214) 0:00:31.829 ******** 2025-04-05 12:20:55.820039 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:55.820292 | orchestrator | 2025-04-05 12:20:55.820322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:55.820962 | orchestrator | Saturday 05 April 2025 12:20:55 +0000 (0:00:00.209) 0:00:32.038 ******** 2025-04-05 12:20:56.009203 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:56.009312 | orchestrator | 2025-04-05 12:20:56.010087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:20:56.010177 | orchestrator | Saturday 05 April 2025 12:20:56 +0000 (0:00:00.190) 0:00:32.228 ******** 2025-04-05 12:20:56.205196 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:56.205701 | orchestrator | 2025-04-05 12:20:56.206333 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-05 12:20:56.206930 | orchestrator | Saturday 05 April 2025 12:20:56 +0000 (0:00:00.195) 0:00:32.424 ******** 2025-04-05 12:20:56.345611 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:56.346074 | orchestrator | 2025-04-05 12:20:56.346728 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-05 12:20:56.347233 | orchestrator | Saturday 05 April 2025 12:20:56 +0000 (0:00:00.140) 0:00:32.565 ******** 2025-04-05 12:20:56.733446 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'eb474160-46dc-5c48-a12b-143126b3371a'}}) 2025-04-05 12:20:56.733615 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'bddbd264-0785-5bf3-9ea2-553c515bd099'}}) 2025-04-05 12:20:56.734663 | orchestrator | 2025-04-05 12:20:56.734965 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-05 12:20:56.735735 | orchestrator | Saturday 05 April 2025 12:20:56 +0000 (0:00:00.387) 0:00:32.952 ******** 2025-04-05 12:20:58.424534 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'}) 2025-04-05 12:20:58.424699 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 
'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'}) 2025-04-05 12:20:58.425557 | orchestrator | 2025-04-05 12:20:58.426869 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-05 12:20:58.427052 | orchestrator | Saturday 05 April 2025 12:20:58 +0000 (0:00:01.689) 0:00:34.641 ******** 2025-04-05 12:20:58.576064 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:20:58.576711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:20:58.576758 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:20:58.578330 | orchestrator | 2025-04-05 12:20:58.580741 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-05 12:20:58.581578 | orchestrator | Saturday 05 April 2025 12:20:58 +0000 (0:00:00.151) 0:00:34.793 ******** 2025-04-05 12:20:59.861734 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'}) 2025-04-05 12:20:59.862175 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'}) 2025-04-05 12:20:59.863055 | orchestrator | 2025-04-05 12:20:59.864860 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-05 12:20:59.865248 | orchestrator | Saturday 05 April 2025 12:20:59 +0000 (0:00:01.286) 0:00:36.080 ******** 2025-04-05 12:21:00.008549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:00.008866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:00.010581 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.011314 | orchestrator | 2025-04-05 12:21:00.012186 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-05 12:21:00.012960 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.147) 0:00:36.227 ******** 2025-04-05 12:21:00.146417 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.147418 | orchestrator | 2025-04-05 12:21:00.148284 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-05 12:21:00.148995 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.138) 0:00:36.366 ******** 2025-04-05 12:21:00.297100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:00.298513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:00.298756 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.299577 | orchestrator | 2025-04-05 12:21:00.300021 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-05 12:21:00.300809 | orchestrator | Saturday 
05 April 2025 12:21:00 +0000 (0:00:00.149) 0:00:36.516 ******** 2025-04-05 12:21:00.427652 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.428257 | orchestrator | 2025-04-05 12:21:00.430466 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-05 12:21:00.569428 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.131) 0:00:36.647 ******** 2025-04-05 12:21:00.569509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:00.569895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:00.571870 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.572658 | orchestrator | 2025-04-05 12:21:00.572706 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-05 12:21:00.573151 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.142) 0:00:36.789 ******** 2025-04-05 12:21:00.787349 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.789009 | orchestrator | 2025-04-05 12:21:00.790340 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-05 12:21:00.948607 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.217) 0:00:37.007 ******** 2025-04-05 12:21:00.948709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:00.949024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:00.949994 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:00.950635 | orchestrator | 2025-04-05 12:21:00.951329 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-05 12:21:00.952214 | orchestrator | Saturday 05 April 2025 12:21:00 +0000 (0:00:00.160) 0:00:37.167 ******** 2025-04-05 12:21:01.082907 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:01.083587 | orchestrator | 2025-04-05 12:21:01.084424 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-05 12:21:01.084802 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.134) 0:00:37.301 ******** 2025-04-05 12:21:01.229990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:01.231490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:01.231965 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.231994 | orchestrator | 2025-04-05 12:21:01.232927 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-05 12:21:01.233126 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.147) 0:00:37.448 ******** 2025-04-05 12:21:01.389916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 
'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:01.391639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:01.393185 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.393707 | orchestrator | 2025-04-05 12:21:01.394710 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-05 12:21:01.395139 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.161) 0:00:37.609 ******** 2025-04-05 12:21:01.542838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:01.542970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:01.544215 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.544943 | orchestrator | 2025-04-05 12:21:01.545595 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-05 12:21:01.546108 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.150) 0:00:37.760 ******** 2025-04-05 12:21:01.682848 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.683584 | orchestrator | 2025-04-05 12:21:01.683938 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-05 12:21:01.684942 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.141) 0:00:37.902 ******** 2025-04-05 12:21:01.817952 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.818090 | orchestrator | 2025-04-05 12:21:01.819005 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-05 12:21:01.819652 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.135) 0:00:38.037 ******** 2025-04-05 12:21:01.947908 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:01.949064 | orchestrator | 2025-04-05 12:21:01.949532 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-05 12:21:01.949563 | orchestrator | Saturday 05 April 2025 12:21:01 +0000 (0:00:00.128) 0:00:38.166 ******** 2025-04-05 12:21:02.082929 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:21:02.083488 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-05 12:21:02.085677 | orchestrator | } 2025-04-05 12:21:02.085925 | orchestrator | 2025-04-05 12:21:02.085958 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-05 12:21:02.086588 | orchestrator | Saturday 05 April 2025 12:21:02 +0000 (0:00:00.135) 0:00:38.302 ******** 2025-04-05 12:21:02.212620 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:21:02.213728 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-05 12:21:02.216840 | orchestrator | } 2025-04-05 12:21:02.216922 | orchestrator | 2025-04-05 12:21:02.216941 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-05 12:21:02.216959 | orchestrator | Saturday 05 April 2025 12:21:02 +0000 (0:00:00.130) 0:00:38.432 ******** 2025-04-05 12:21:02.335692 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:21:02.337073 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-05 
12:21:02.337990 | orchestrator | } 2025-04-05 12:21:02.338403 | orchestrator | 2025-04-05 12:21:02.338904 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-05 12:21:02.339425 | orchestrator | Saturday 05 April 2025 12:21:02 +0000 (0:00:00.121) 0:00:38.554 ******** 2025-04-05 12:21:02.947109 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:02.948079 | orchestrator | 2025-04-05 12:21:02.948555 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-05 12:21:02.949548 | orchestrator | Saturday 05 April 2025 12:21:02 +0000 (0:00:00.610) 0:00:39.164 ******** 2025-04-05 12:21:03.532897 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:03.533540 | orchestrator | 2025-04-05 12:21:03.534513 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-05 12:21:03.535239 | orchestrator | Saturday 05 April 2025 12:21:03 +0000 (0:00:00.587) 0:00:39.752 ******** 2025-04-05 12:21:04.085481 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:04.086388 | orchestrator | 2025-04-05 12:21:04.087447 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-05 12:21:04.088121 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.551) 0:00:40.304 ******** 2025-04-05 12:21:04.219560 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:04.220052 | orchestrator | 2025-04-05 12:21:04.220890 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-05 12:21:04.221992 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.135) 0:00:40.439 ******** 2025-04-05 12:21:04.311926 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:04.312702 | orchestrator | 2025-04-05 12:21:04.314074 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-05 12:21:04.314824 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.092) 0:00:40.532 ******** 2025-04-05 12:21:04.406995 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:04.407870 | orchestrator | 2025-04-05 12:21:04.408572 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-05 12:21:04.409477 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.094) 0:00:40.626 ******** 2025-04-05 12:21:04.542625 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:21:04.543292 | orchestrator |  "vgs_report": { 2025-04-05 12:21:04.544348 | orchestrator |  "vg": [] 2025-04-05 12:21:04.545321 | orchestrator |  } 2025-04-05 12:21:04.546176 | orchestrator | } 2025-04-05 12:21:04.546683 | orchestrator | 2025-04-05 12:21:04.547431 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-05 12:21:04.548131 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.135) 0:00:40.762 ******** 2025-04-05 12:21:04.661505 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:04.662251 | orchestrator | 2025-04-05 12:21:04.662604 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-05 12:21:04.662680 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.117) 0:00:40.879 ******** 2025-04-05 12:21:04.797051 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:04.797535 | orchestrator | 2025-04-05 12:21:04.798324 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-04-05 12:21:04.798566 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.137) 0:00:41.016 ******** 2025-04-05 12:21:04.927483 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:04.928032 | orchestrator | 2025-04-05 12:21:04.928511 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-05 12:21:04.929273 | orchestrator | Saturday 05 April 2025 12:21:04 +0000 (0:00:00.131) 0:00:41.147 ******** 2025-04-05 12:21:05.060162 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.060569 | orchestrator | 2025-04-05 12:21:05.061256 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-05 12:21:05.062154 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.130) 0:00:41.278 ******** 2025-04-05 12:21:05.292759 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.293653 | orchestrator | 2025-04-05 12:21:05.293698 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-05 12:21:05.295837 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.233) 0:00:41.512 ******** 2025-04-05 12:21:05.424413 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.424617 | orchestrator | 2025-04-05 12:21:05.425505 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-05 12:21:05.426125 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.131) 0:00:41.644 ******** 2025-04-05 12:21:05.560993 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.681286 | orchestrator | 2025-04-05 12:21:05.681369 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-05 12:21:05.681388 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.134) 0:00:41.778 ******** 2025-04-05 12:21:05.681416 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.682419 | orchestrator | 2025-04-05 12:21:05.683842 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-05 12:21:05.684035 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.122) 0:00:41.901 ******** 2025-04-05 12:21:05.799539 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.800342 | orchestrator | 2025-04-05 12:21:05.800910 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-05 12:21:05.801587 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.118) 0:00:42.019 ******** 2025-04-05 12:21:05.930941 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:05.931230 | orchestrator | 2025-04-05 12:21:05.931692 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-05 12:21:05.932166 | orchestrator | Saturday 05 April 2025 12:21:05 +0000 (0:00:00.131) 0:00:42.151 ******** 2025-04-05 12:21:06.068779 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.069317 | orchestrator | 2025-04-05 12:21:06.069780 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-05 12:21:06.070540 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.136) 0:00:42.287 ******** 2025-04-05 12:21:06.209479 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.209613 | orchestrator | 2025-04-05 12:21:06.209637 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-04-05 12:21:06.324314 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.140) 0:00:42.428 ******** 2025-04-05 12:21:06.324369 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.324464 | orchestrator | 2025-04-05 12:21:06.324966 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-05 12:21:06.325450 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.116) 0:00:42.544 ******** 2025-04-05 12:21:06.469102 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.469343 | orchestrator | 2025-04-05 12:21:06.469909 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-05 12:21:06.470235 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.144) 0:00:42.689 ******** 2025-04-05 12:21:06.623422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:06.623555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:06.623769 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.624122 | orchestrator | 2025-04-05 12:21:06.624462 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-05 12:21:06.626643 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.153) 0:00:42.842 ******** 2025-04-05 12:21:06.785202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:06.785813 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:06.786417 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:06.787168 | orchestrator | 2025-04-05 12:21:06.787813 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-05 12:21:06.788705 | orchestrator | Saturday 05 April 2025 12:21:06 +0000 (0:00:00.162) 0:00:43.004 ******** 2025-04-05 12:21:07.055248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.055910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:07.056377 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.057099 | orchestrator | 2025-04-05 12:21:07.058119 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-05 12:21:07.058602 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.269) 0:00:43.273 ******** 2025-04-05 12:21:07.210725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.211615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 
12:21:07.212291 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.213258 | orchestrator | 2025-04-05 12:21:07.213987 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-05 12:21:07.214671 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.154) 0:00:43.428 ******** 2025-04-05 12:21:07.367117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.367660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:07.367770 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.368614 | orchestrator | 2025-04-05 12:21:07.368905 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-05 12:21:07.370455 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.158) 0:00:43.586 ******** 2025-04-05 12:21:07.502393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.503578 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:07.503922 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.504923 | orchestrator | 2025-04-05 12:21:07.505579 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-05 12:21:07.505944 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.135) 0:00:43.722 ******** 2025-04-05 12:21:07.667459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.667760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:07.668681 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.669313 | orchestrator | 2025-04-05 12:21:07.670214 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-05 12:21:07.670943 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.164) 0:00:43.887 ******** 2025-04-05 12:21:07.820636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:07.823426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:07.823455 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:07.823900 | orchestrator | 2025-04-05 12:21:07.824488 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-05 12:21:07.824814 | orchestrator | Saturday 05 April 2025 12:21:07 +0000 (0:00:00.153) 0:00:44.040 ******** 2025-04-05 12:21:08.362555 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:08.363465 | orchestrator | 2025-04-05 12:21:08.364178 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-04-05 12:21:08.364903 | orchestrator | Saturday 05 April 2025 12:21:08 +0000 (0:00:00.541) 0:00:44.581 ******** 2025-04-05 12:21:08.882906 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:08.883525 | orchestrator | 2025-04-05 12:21:08.884320 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-05 12:21:08.884760 | orchestrator | Saturday 05 April 2025 12:21:08 +0000 (0:00:00.520) 0:00:45.102 ******** 2025-04-05 12:21:09.021771 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:21:09.021985 | orchestrator | 2025-04-05 12:21:09.022404 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-05 12:21:09.023398 | orchestrator | Saturday 05 April 2025 12:21:09 +0000 (0:00:00.137) 0:00:45.240 ******** 2025-04-05 12:21:09.193399 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'vg_name': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'}) 2025-04-05 12:21:09.194043 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'vg_name': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'}) 2025-04-05 12:21:09.196000 | orchestrator | 2025-04-05 12:21:09.196589 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-05 12:21:09.196611 | orchestrator | Saturday 05 April 2025 12:21:09 +0000 (0:00:00.172) 0:00:45.413 ******** 2025-04-05 12:21:09.339867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:09.341037 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:09.342232 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:09.343301 | orchestrator | 2025-04-05 12:21:09.344197 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-05 12:21:09.345392 | orchestrator | Saturday 05 April 2025 12:21:09 +0000 (0:00:00.145) 0:00:45.558 ******** 2025-04-05 12:21:09.603731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:09.604331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:09.604365 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:09.604388 | orchestrator | 2025-04-05 12:21:09.605108 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-05 12:21:09.605891 | orchestrator | Saturday 05 April 2025 12:21:09 +0000 (0:00:00.262) 0:00:45.820 ******** 2025-04-05 12:21:09.754102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'})  2025-04-05 12:21:09.754611 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'})  2025-04-05 12:21:09.755385 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:21:09.756014 | orchestrator | 2025-04-05 
12:21:09.756638 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-05 12:21:09.757211 | orchestrator | Saturday 05 April 2025 12:21:09 +0000 (0:00:00.152) 0:00:45.973 ******** 2025-04-05 12:21:10.372771 | orchestrator | ok: [testbed-node-4] => { 2025-04-05 12:21:10.373010 | orchestrator |  "lvm_report": { 2025-04-05 12:21:10.375352 | orchestrator |  "lv": [ 2025-04-05 12:21:10.376307 | orchestrator |  { 2025-04-05 12:21:10.376960 | orchestrator |  "lv_name": "osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099", 2025-04-05 12:21:10.377760 | orchestrator |  "vg_name": "ceph-bddbd264-0785-5bf3-9ea2-553c515bd099" 2025-04-05 12:21:10.378418 | orchestrator |  }, 2025-04-05 12:21:10.378885 | orchestrator |  { 2025-04-05 12:21:10.379270 | orchestrator |  "lv_name": "osd-block-eb474160-46dc-5c48-a12b-143126b3371a", 2025-04-05 12:21:10.379653 | orchestrator |  "vg_name": "ceph-eb474160-46dc-5c48-a12b-143126b3371a" 2025-04-05 12:21:10.380045 | orchestrator |  } 2025-04-05 12:21:10.380411 | orchestrator |  ], 2025-04-05 12:21:10.380765 | orchestrator |  "pv": [ 2025-04-05 12:21:10.381225 | orchestrator |  { 2025-04-05 12:21:10.381530 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-05 12:21:10.381859 | orchestrator |  "vg_name": "ceph-eb474160-46dc-5c48-a12b-143126b3371a" 2025-04-05 12:21:10.382284 | orchestrator |  }, 2025-04-05 12:21:10.382636 | orchestrator |  { 2025-04-05 12:21:10.382939 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-05 12:21:10.383277 | orchestrator |  "vg_name": "ceph-bddbd264-0785-5bf3-9ea2-553c515bd099" 2025-04-05 12:21:10.383692 | orchestrator |  } 2025-04-05 12:21:10.383964 | orchestrator |  ] 2025-04-05 12:21:10.384308 | orchestrator |  } 2025-04-05 12:21:10.384644 | orchestrator | } 2025-04-05 12:21:10.384976 | orchestrator | 2025-04-05 12:21:10.385336 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-05 12:21:10.385684 | orchestrator | 2025-04-05 12:21:10.386012 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:21:10.386391 | orchestrator | Saturday 05 April 2025 12:21:10 +0000 (0:00:00.617) 0:00:46.590 ******** 2025-04-05 12:21:10.726321 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-05 12:21:10.726638 | orchestrator | 2025-04-05 12:21:10.727387 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-05 12:21:10.729152 | orchestrator | Saturday 05 April 2025 12:21:10 +0000 (0:00:00.355) 0:00:46.946 ******** 2025-04-05 12:21:11.070081 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:11.071420 | orchestrator | 2025-04-05 12:21:11.071862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:11.071904 | orchestrator | Saturday 05 April 2025 12:21:11 +0000 (0:00:00.343) 0:00:47.289 ******** 2025-04-05 12:21:11.491356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-05 12:21:11.492350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-05 12:21:11.495011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-05 12:21:11.496094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-05 12:21:11.496133 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-05 12:21:11.496939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-05 12:21:11.497379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-05 12:21:11.497488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-05 12:21:11.498435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-05 12:21:11.498581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-05 12:21:11.498607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-05 12:21:11.498870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-05 12:21:11.499318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-05 12:21:11.499893 | orchestrator | 2025-04-05 12:21:11.500062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:11.500235 | orchestrator | Saturday 05 April 2025 12:21:11 +0000 (0:00:00.421) 0:00:47.711 ******** 2025-04-05 12:21:11.675749 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:11.852677 | orchestrator | 2025-04-05 12:21:11.852735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:11.852751 | orchestrator | Saturday 05 April 2025 12:21:11 +0000 (0:00:00.181) 0:00:47.892 ******** 2025-04-05 12:21:11.852775 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:11.853327 | orchestrator | 2025-04-05 12:21:11.854252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:11.855341 | orchestrator | Saturday 05 April 2025 12:21:11 +0000 (0:00:00.179) 0:00:48.072 ******** 2025-04-05 12:21:12.045525 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:12.046215 | orchestrator | 2025-04-05 12:21:12.047382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.048467 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.192) 0:00:48.265 ******** 2025-04-05 12:21:12.231847 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:12.233759 | orchestrator | 2025-04-05 12:21:12.235225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.236085 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.184) 0:00:48.449 ******** 2025-04-05 12:21:12.425651 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:12.426615 | orchestrator | 2025-04-05 12:21:12.427678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.428092 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.195) 0:00:48.645 ******** 2025-04-05 12:21:12.604205 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:12.604362 | orchestrator | 2025-04-05 12:21:12.605092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.605946 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.178) 0:00:48.823 ******** 2025-04-05 12:21:12.768198 | orchestrator | skipping: 
[testbed-node-5] 2025-04-05 12:21:12.769353 | orchestrator | 2025-04-05 12:21:12.771034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.945053 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.164) 0:00:48.988 ******** 2025-04-05 12:21:12.945137 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:12.946327 | orchestrator | 2025-04-05 12:21:12.947035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:12.948222 | orchestrator | Saturday 05 April 2025 12:21:12 +0000 (0:00:00.176) 0:00:49.164 ******** 2025-04-05 12:21:13.457206 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f) 2025-04-05 12:21:13.457655 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f) 2025-04-05 12:21:13.457689 | orchestrator | 2025-04-05 12:21:13.458422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:13.837767 | orchestrator | Saturday 05 April 2025 12:21:13 +0000 (0:00:00.511) 0:00:49.676 ******** 2025-04-05 12:21:13.837902 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c) 2025-04-05 12:21:13.837975 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c) 2025-04-05 12:21:13.838357 | orchestrator | 2025-04-05 12:21:13.838879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:13.839573 | orchestrator | Saturday 05 April 2025 12:21:13 +0000 (0:00:00.381) 0:00:50.057 ******** 2025-04-05 12:21:14.231183 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f) 2025-04-05 12:21:14.231604 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f) 2025-04-05 12:21:14.232094 | orchestrator | 2025-04-05 12:21:14.232989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:14.233718 | orchestrator | Saturday 05 April 2025 12:21:14 +0000 (0:00:00.391) 0:00:50.449 ******** 2025-04-05 12:21:14.634092 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f) 2025-04-05 12:21:14.634516 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f) 2025-04-05 12:21:14.635125 | orchestrator | 2025-04-05 12:21:14.635981 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-05 12:21:14.636710 | orchestrator | Saturday 05 April 2025 12:21:14 +0000 (0:00:00.404) 0:00:50.854 ******** 2025-04-05 12:21:14.936612 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-05 12:21:14.937426 | orchestrator | 2025-04-05 12:21:14.937474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:14.938175 | orchestrator | Saturday 05 April 2025 12:21:14 +0000 (0:00:00.302) 0:00:51.156 ******** 2025-04-05 12:21:15.339972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-05 12:21:15.340838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
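
For context: the "Add known links to the list of available block devices" loop above walks every disk reported by Ansible facts and appends its persistent /dev/disk/by-id names (e.g. scsi-0QEMU_QEMU_HARDDISK_...) to the list of candidate devices. A minimal sketch of such a step, assuming the candidate list is kept in a fact named available_block_devices; the actual /ansible/tasks/_add-device-links.yml may be implemented differently:

# Append the by-id aliases that the facts module already reports for each disk
# (ansible_facts.devices.<dev>.links.ids) to the candidate device list.
- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    available_block_devices: >-
      {{ available_block_devices | default([])
         + ansible_facts.devices[item].links.ids }}
  loop: "{{ ansible_facts.devices.keys() | list }}"
  when: ansible_facts.devices[item].links.ids | length > 0
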
2025-04-05 12:21:15.342091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-05 12:21:15.343571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-05 12:21:15.344258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-05 12:21:15.345162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-05 12:21:15.345613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-05 12:21:15.346125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-05 12:21:15.346758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-05 12:21:15.347115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-05 12:21:15.347563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-05 12:21:15.347994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-05 12:21:15.348499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-05 12:21:15.348888 | orchestrator | 2025-04-05 12:21:15.349337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:15.349712 | orchestrator | Saturday 05 April 2025 12:21:15 +0000 (0:00:00.400) 0:00:51.557 ******** 2025-04-05 12:21:15.528953 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:15.529412 | orchestrator | 2025-04-05 12:21:15.529513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:15.529631 | orchestrator | Saturday 05 April 2025 12:21:15 +0000 (0:00:00.186) 0:00:51.744 ******** 2025-04-05 12:21:15.711819 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:15.712966 | orchestrator | 2025-04-05 12:21:15.713245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:15.713709 | orchestrator | Saturday 05 April 2025 12:21:15 +0000 (0:00:00.187) 0:00:51.932 ******** 2025-04-05 12:21:15.897348 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:15.897514 | orchestrator | 2025-04-05 12:21:15.897961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:15.898657 | orchestrator | Saturday 05 April 2025 12:21:15 +0000 (0:00:00.184) 0:00:52.116 ******** 2025-04-05 12:21:16.331116 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:16.331468 | orchestrator | 2025-04-05 12:21:16.332833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:16.333216 | orchestrator | Saturday 05 April 2025 12:21:16 +0000 (0:00:00.431) 0:00:52.548 ******** 2025-04-05 12:21:16.519573 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:16.519710 | orchestrator | 2025-04-05 12:21:16.522060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:16.702808 | orchestrator | Saturday 05 April 2025 12:21:16 +0000 (0:00:00.190) 0:00:52.739 ******** 2025-04-05 12:21:16.702899 | orchestrator | 
skipping: [testbed-node-5] 2025-04-05 12:21:16.702968 | orchestrator | 2025-04-05 12:21:16.703754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:16.704247 | orchestrator | Saturday 05 April 2025 12:21:16 +0000 (0:00:00.182) 0:00:52.922 ******** 2025-04-05 12:21:16.895700 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:16.895892 | orchestrator | 2025-04-05 12:21:16.896654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:16.898329 | orchestrator | Saturday 05 April 2025 12:21:16 +0000 (0:00:00.193) 0:00:53.115 ******** 2025-04-05 12:21:17.068501 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:17.069036 | orchestrator | 2025-04-05 12:21:17.069454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:17.071723 | orchestrator | Saturday 05 April 2025 12:21:17 +0000 (0:00:00.172) 0:00:53.288 ******** 2025-04-05 12:21:17.657669 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-05 12:21:17.658496 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-05 12:21:17.659449 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-05 12:21:17.659900 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-05 12:21:17.661706 | orchestrator | 2025-04-05 12:21:17.662446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:17.662874 | orchestrator | Saturday 05 April 2025 12:21:17 +0000 (0:00:00.588) 0:00:53.877 ******** 2025-04-05 12:21:17.839446 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:17.839994 | orchestrator | 2025-04-05 12:21:17.841864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:17.842366 | orchestrator | Saturday 05 April 2025 12:21:17 +0000 (0:00:00.182) 0:00:54.059 ******** 2025-04-05 12:21:18.021362 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:18.021585 | orchestrator | 2025-04-05 12:21:18.022186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:18.022832 | orchestrator | Saturday 05 April 2025 12:21:18 +0000 (0:00:00.179) 0:00:54.239 ******** 2025-04-05 12:21:18.203054 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:18.203649 | orchestrator | 2025-04-05 12:21:18.204680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-05 12:21:18.206006 | orchestrator | Saturday 05 April 2025 12:21:18 +0000 (0:00:00.182) 0:00:54.422 ******** 2025-04-05 12:21:18.382611 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:18.382923 | orchestrator | 2025-04-05 12:21:18.383734 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-05 12:21:18.384492 | orchestrator | Saturday 05 April 2025 12:21:18 +0000 (0:00:00.180) 0:00:54.602 ******** 2025-04-05 12:21:18.624999 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:18.625664 | orchestrator | 2025-04-05 12:21:18.626404 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-05 12:21:18.627749 | orchestrator | Saturday 05 April 2025 12:21:18 +0000 (0:00:00.241) 0:00:54.844 ******** 2025-04-05 12:21:18.834871 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'4aac11a6-844c-526d-9ac8-c50cbafa4162'}}) 2025-04-05 12:21:18.837253 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7b2d6610-beab-5485-bcb7-dfee77450e0c'}}) 2025-04-05 12:21:18.837288 | orchestrator | 2025-04-05 12:21:18.837731 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-05 12:21:18.837969 | orchestrator | Saturday 05 April 2025 12:21:18 +0000 (0:00:00.208) 0:00:55.052 ******** 2025-04-05 12:21:20.653164 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'}) 2025-04-05 12:21:20.653988 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'}) 2025-04-05 12:21:20.655776 | orchestrator | 2025-04-05 12:21:20.656687 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-05 12:21:20.657157 | orchestrator | Saturday 05 April 2025 12:21:20 +0000 (0:00:01.819) 0:00:56.871 ******** 2025-04-05 12:21:20.814104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:20.814297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:20.814918 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:20.815675 | orchestrator | 2025-04-05 12:21:20.817629 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-05 12:21:20.818616 | orchestrator | Saturday 05 April 2025 12:21:20 +0000 (0:00:00.162) 0:00:57.034 ******** 2025-04-05 12:21:22.187579 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'}) 2025-04-05 12:21:22.188254 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'}) 2025-04-05 12:21:22.189074 | orchestrator | 2025-04-05 12:21:22.189706 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-05 12:21:22.190229 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:01.371) 0:00:58.406 ******** 2025-04-05 12:21:22.348487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:22.348679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:22.349053 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:22.349089 | orchestrator | 2025-04-05 12:21:22.349929 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-05 12:21:22.350121 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:00.162) 0:00:58.569 ******** 2025-04-05 12:21:22.489573 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:22.490131 | orchestrator | 2025-04-05 12:21:22.492168 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-04-05 12:21:22.492474 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:00.140) 0:00:58.709 ******** 2025-04-05 12:21:22.644392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:22.644540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:22.644850 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:22.645756 | orchestrator | 2025-04-05 12:21:22.647465 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-05 12:21:22.777845 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:00.155) 0:00:58.864 ******** 2025-04-05 12:21:22.777910 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:22.779560 | orchestrator | 2025-04-05 12:21:22.780972 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-05 12:21:22.781000 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:00.132) 0:00:58.996 ******** 2025-04-05 12:21:22.932827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:22.933641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:22.934124 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:22.934927 | orchestrator | 2025-04-05 12:21:22.935546 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-05 12:21:22.936209 | orchestrator | Saturday 05 April 2025 12:21:22 +0000 (0:00:00.155) 0:00:59.152 ******** 2025-04-05 12:21:23.221875 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:23.222352 | orchestrator | 2025-04-05 12:21:23.222508 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-05 12:21:23.222587 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.287) 0:00:59.440 ******** 2025-04-05 12:21:23.379386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:23.379563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:23.380095 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:23.380388 | orchestrator | 2025-04-05 12:21:23.381538 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-05 12:21:23.381751 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.159) 0:00:59.599 ******** 2025-04-05 12:21:23.516762 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:23.517160 | orchestrator | 2025-04-05 12:21:23.518898 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-05 12:21:23.671124 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.136) 0:00:59.736 ******** 2025-04-05 12:21:23.671216 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:23.671823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:23.672266 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:23.672732 | orchestrator | 2025-04-05 12:21:23.673447 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-05 12:21:23.673815 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.155) 0:00:59.891 ******** 2025-04-05 12:21:23.824807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:23.826342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:23.826440 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:23.826464 | orchestrator | 2025-04-05 12:21:23.827243 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-05 12:21:23.827909 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.152) 0:01:00.043 ******** 2025-04-05 12:21:23.966944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:23.967684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:23.967733 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:23.968565 | orchestrator | 2025-04-05 12:21:23.969051 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-05 12:21:23.969561 | orchestrator | Saturday 05 April 2025 12:21:23 +0000 (0:00:00.141) 0:01:00.184 ******** 2025-04-05 12:21:24.081714 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:24.082108 | orchestrator | 2025-04-05 12:21:24.082814 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-05 12:21:24.083297 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.116) 0:01:00.301 ******** 2025-04-05 12:21:24.212117 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:24.212894 | orchestrator | 2025-04-05 12:21:24.214149 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-05 12:21:24.214407 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.130) 0:01:00.432 ******** 2025-04-05 12:21:24.344570 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:24.345502 | orchestrator | 2025-04-05 12:21:24.345953 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-05 12:21:24.347236 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.131) 0:01:00.563 ******** 2025-04-05 12:21:24.478313 | orchestrator | ok: [testbed-node-5] => { 2025-04-05 12:21:24.478851 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-05 12:21:24.479855 | orchestrator | } 2025-04-05 12:21:24.480427 | orchestrator | 2025-04-05 12:21:24.481220 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-05 12:21:24.481994 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.134) 0:01:00.697 ******** 2025-04-05 12:21:24.606689 | orchestrator | ok: [testbed-node-5] => { 2025-04-05 12:21:24.607532 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-05 12:21:24.608485 | orchestrator | } 2025-04-05 12:21:24.609405 | orchestrator | 2025-04-05 12:21:24.610061 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-05 12:21:24.610721 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.128) 0:01:00.826 ******** 2025-04-05 12:21:24.878272 | orchestrator | ok: [testbed-node-5] => { 2025-04-05 12:21:24.878954 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-05 12:21:24.879767 | orchestrator | } 2025-04-05 12:21:24.880599 | orchestrator | 2025-04-05 12:21:24.881281 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-05 12:21:24.881899 | orchestrator | Saturday 05 April 2025 12:21:24 +0000 (0:00:00.269) 0:01:01.096 ******** 2025-04-05 12:21:25.420905 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:25.421738 | orchestrator | 2025-04-05 12:21:25.422084 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-05 12:21:25.422995 | orchestrator | Saturday 05 April 2025 12:21:25 +0000 (0:00:00.544) 0:01:01.640 ******** 2025-04-05 12:21:25.940527 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:26.503727 | orchestrator | 2025-04-05 12:21:26.503888 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-05 12:21:26.503908 | orchestrator | Saturday 05 April 2025 12:21:25 +0000 (0:00:00.517) 0:01:02.158 ******** 2025-04-05 12:21:26.503937 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:26.504012 | orchestrator | 2025-04-05 12:21:26.504964 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-05 12:21:26.505870 | orchestrator | Saturday 05 April 2025 12:21:26 +0000 (0:00:00.564) 0:01:02.722 ******** 2025-04-05 12:21:26.674360 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:26.675586 | orchestrator | 2025-04-05 12:21:26.676026 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-05 12:21:26.677611 | orchestrator | Saturday 05 April 2025 12:21:26 +0000 (0:00:00.170) 0:01:02.893 ******** 2025-04-05 12:21:26.777150 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:26.778293 | orchestrator | 2025-04-05 12:21:26.778921 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-05 12:21:26.780515 | orchestrator | Saturday 05 April 2025 12:21:26 +0000 (0:00:00.102) 0:01:02.996 ******** 2025-04-05 12:21:26.894774 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:26.895217 | orchestrator | 2025-04-05 12:21:26.895250 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-05 12:21:26.895845 | orchestrator | Saturday 05 April 2025 12:21:26 +0000 (0:00:00.116) 0:01:03.113 ******** 2025-04-05 12:21:27.045217 | orchestrator | ok: [testbed-node-5] => { 2025-04-05 12:21:27.046098 | orchestrator |  "vgs_report": { 2025-04-05 12:21:27.046132 | orchestrator |  "vg": [] 2025-04-05 12:21:27.047627 | orchestrator |  } 2025-04-05 12:21:27.048273 | orchestrator 
| } 2025-04-05 12:21:27.049540 | orchestrator | 2025-04-05 12:21:27.050703 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-05 12:21:27.052456 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.150) 0:01:03.263 ******** 2025-04-05 12:21:27.199502 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:27.200018 | orchestrator | 2025-04-05 12:21:27.200771 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-05 12:21:27.201587 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.155) 0:01:03.419 ******** 2025-04-05 12:21:27.337922 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:27.339010 | orchestrator | 2025-04-05 12:21:27.339910 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-05 12:21:27.340920 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.138) 0:01:03.557 ******** 2025-04-05 12:21:27.474975 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:27.476657 | orchestrator | 2025-04-05 12:21:27.476973 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-05 12:21:27.477002 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.136) 0:01:03.694 ******** 2025-04-05 12:21:27.778483 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:27.779340 | orchestrator | 2025-04-05 12:21:27.780429 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-05 12:21:27.782142 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.302) 0:01:03.996 ******** 2025-04-05 12:21:27.924033 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:27.925132 | orchestrator | 2025-04-05 12:21:27.925739 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-05 12:21:27.926541 | orchestrator | Saturday 05 April 2025 12:21:27 +0000 (0:00:00.147) 0:01:04.143 ******** 2025-04-05 12:21:28.071754 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.072247 | orchestrator | 2025-04-05 12:21:28.073308 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-05 12:21:28.074185 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.147) 0:01:04.291 ******** 2025-04-05 12:21:28.217495 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.218103 | orchestrator | 2025-04-05 12:21:28.219626 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-05 12:21:28.219694 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.144) 0:01:04.435 ******** 2025-04-05 12:21:28.361886 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.362593 | orchestrator | 2025-04-05 12:21:28.363576 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-05 12:21:28.363982 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.145) 0:01:04.581 ******** 2025-04-05 12:21:28.495236 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.496035 | orchestrator | 2025-04-05 12:21:28.496635 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-05 12:21:28.499180 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.133) 0:01:04.714 ******** 2025-04-05 12:21:28.637189 | orchestrator | 
skipping: [testbed-node-5] 2025-04-05 12:21:28.638727 | orchestrator | 2025-04-05 12:21:28.639217 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-05 12:21:28.640230 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.142) 0:01:04.856 ******** 2025-04-05 12:21:28.795825 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.798943 | orchestrator | 2025-04-05 12:21:28.938006 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-05 12:21:28.938112 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.156) 0:01:05.013 ******** 2025-04-05 12:21:28.938138 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:28.939734 | orchestrator | 2025-04-05 12:21:28.940442 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-05 12:21:28.940475 | orchestrator | Saturday 05 April 2025 12:21:28 +0000 (0:00:00.143) 0:01:05.157 ******** 2025-04-05 12:21:29.088052 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:29.088891 | orchestrator | 2025-04-05 12:21:29.089963 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-05 12:21:29.090834 | orchestrator | Saturday 05 April 2025 12:21:29 +0000 (0:00:00.150) 0:01:05.307 ******** 2025-04-05 12:21:29.223406 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:29.224053 | orchestrator | 2025-04-05 12:21:29.225146 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-05 12:21:29.225885 | orchestrator | Saturday 05 April 2025 12:21:29 +0000 (0:00:00.135) 0:01:05.443 ******** 2025-04-05 12:21:29.400946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:29.401056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:29.402690 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:29.404039 | orchestrator | 2025-04-05 12:21:29.404258 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-05 12:21:29.405208 | orchestrator | Saturday 05 April 2025 12:21:29 +0000 (0:00:00.177) 0:01:05.620 ******** 2025-04-05 12:21:29.749929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:29.750165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:29.751230 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:29.751994 | orchestrator | 2025-04-05 12:21:29.752821 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-05 12:21:29.755282 | orchestrator | Saturday 05 April 2025 12:21:29 +0000 (0:00:00.348) 0:01:05.969 ******** 2025-04-05 12:21:29.936910 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:29.937764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:29.937821 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:29.939981 | orchestrator | 2025-04-05 12:21:30.113561 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-05 12:21:30.113652 | orchestrator | Saturday 05 April 2025 12:21:29 +0000 (0:00:00.185) 0:01:06.154 ******** 2025-04-05 12:21:30.113682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:30.113738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:30.114778 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:30.115892 | orchestrator | 2025-04-05 12:21:30.116239 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-05 12:21:30.117653 | orchestrator | Saturday 05 April 2025 12:21:30 +0000 (0:00:00.177) 0:01:06.332 ******** 2025-04-05 12:21:30.284161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:30.285052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:30.285620 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:30.286894 | orchestrator | 2025-04-05 12:21:30.287905 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-05 12:21:30.288991 | orchestrator | Saturday 05 April 2025 12:21:30 +0000 (0:00:00.170) 0:01:06.503 ******** 2025-04-05 12:21:30.456761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:30.458513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:30.460244 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:30.460693 | orchestrator | 2025-04-05 12:21:30.461500 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-05 12:21:30.462868 | orchestrator | Saturday 05 April 2025 12:21:30 +0000 (0:00:00.172) 0:01:06.675 ******** 2025-04-05 12:21:30.647828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:30.648118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:30.648151 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:30.650066 | orchestrator | 2025-04-05 12:21:30.653759 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-05 12:21:30.810770 | orchestrator | Saturday 05 April 2025 12:21:30 +0000 (0:00:00.191) 0:01:06.867 ******** 2025-04-05 12:21:30.810868 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:30.812983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:30.813433 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:30.813638 | orchestrator | 2025-04-05 12:21:30.813666 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-05 12:21:30.813917 | orchestrator | Saturday 05 April 2025 12:21:30 +0000 (0:00:00.164) 0:01:07.031 ******** 2025-04-05 12:21:31.365137 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:31.365323 | orchestrator | 2025-04-05 12:21:31.365813 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-05 12:21:31.365906 | orchestrator | Saturday 05 April 2025 12:21:31 +0000 (0:00:00.552) 0:01:07.583 ******** 2025-04-05 12:21:31.896719 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:31.897805 | orchestrator | 2025-04-05 12:21:31.898688 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-05 12:21:31.899683 | orchestrator | Saturday 05 April 2025 12:21:31 +0000 (0:00:00.531) 0:01:08.115 ******** 2025-04-05 12:21:32.052566 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:21:32.052675 | orchestrator | 2025-04-05 12:21:32.055569 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-05 12:21:32.245037 | orchestrator | Saturday 05 April 2025 12:21:32 +0000 (0:00:00.154) 0:01:08.270 ******** 2025-04-05 12:21:32.245083 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'vg_name': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'}) 2025-04-05 12:21:32.245428 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'vg_name': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'}) 2025-04-05 12:21:32.246539 | orchestrator | 2025-04-05 12:21:32.247764 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-05 12:21:32.248834 | orchestrator | Saturday 05 April 2025 12:21:32 +0000 (0:00:00.193) 0:01:08.463 ******** 2025-04-05 12:21:32.585512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:32.586566 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  2025-04-05 12:21:32.587371 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:21:32.588437 | orchestrator | 2025-04-05 12:21:32.589108 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-05 12:21:32.590009 | orchestrator | Saturday 05 April 2025 12:21:32 +0000 (0:00:00.340) 0:01:08.804 ******** 2025-04-05 12:21:32.761963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})  2025-04-05 12:21:32.762715 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})  
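
The "Get list of Ceph LVs/PVs with associated VGs" tasks and the following "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" step almost certainly wrap LVM's JSON reporting, since the combined result printed just below contains exactly lv_name/vg_name and pv_name/vg_name pairs. A hedged sketch of that pattern; the -o field lists and the merge expression are assumptions, only the register names come from the task title:

# Query LVM for LV->VG and PV->VG mappings as JSON.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false
  become: true

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false
  become: true

# Merge both reports into the lvm_report structure shown by 'Print LVM report data'.
- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"
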
2025-04-05 12:21:32.763513 | orchestrator | skipping: [testbed-node-5]
2025-04-05 12:21:32.764251 | orchestrator |
2025-04-05 12:21:32.764948 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-05 12:21:32.766263 | orchestrator | Saturday 05 April 2025 12:21:32 +0000 (0:00:00.177) 0:01:08.981 ********
2025-04-05 12:21:32.937612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'})
2025-04-05 12:21:32.938210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'})
2025-04-05 12:21:32.939232 | orchestrator | skipping: [testbed-node-5]
2025-04-05 12:21:32.940536 | orchestrator |
2025-04-05 12:21:32.941367 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-05 12:21:32.942246 | orchestrator | Saturday 05 April 2025 12:21:32 +0000 (0:00:00.174) 0:01:09.156 ********
2025-04-05 12:21:33.366826 | orchestrator | ok: [testbed-node-5] => {
2025-04-05 12:21:33.367609 | orchestrator |     "lvm_report": {
2025-04-05 12:21:33.367645 | orchestrator |         "lv": [
2025-04-05 12:21:33.368629 | orchestrator |             {
2025-04-05 12:21:33.369702 | orchestrator |                 "lv_name": "osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162",
2025-04-05 12:21:33.370763 | orchestrator |                 "vg_name": "ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162"
2025-04-05 12:21:33.372217 | orchestrator |             },
2025-04-05 12:21:33.373410 | orchestrator |             {
2025-04-05 12:21:33.373834 | orchestrator |                 "lv_name": "osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c",
2025-04-05 12:21:33.374680 | orchestrator |                 "vg_name": "ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c"
2025-04-05 12:21:33.375681 | orchestrator |             }
2025-04-05 12:21:33.376503 | orchestrator |         ],
2025-04-05 12:21:33.377358 | orchestrator |         "pv": [
2025-04-05 12:21:33.378243 | orchestrator |             {
2025-04-05 12:21:33.378465 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-05 12:21:33.379640 | orchestrator |                 "vg_name": "ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162"
2025-04-05 12:21:33.380504 | orchestrator |             },
2025-04-05 12:21:33.380595 | orchestrator |             {
2025-04-05 12:21:33.381076 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-05 12:21:33.381896 | orchestrator |                 "vg_name": "ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c"
2025-04-05 12:21:33.382298 | orchestrator |             }
2025-04-05 12:21:33.382993 | orchestrator |         ]
2025-04-05 12:21:33.383597 | orchestrator |     }
2025-04-05 12:21:33.384016 | orchestrator | }
2025-04-05 12:21:33.385646 | orchestrator |
2025-04-05 12:21:33.387343 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:21:33.387846 | orchestrator | 2025-04-05 12:21:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-05 12:21:33.388715 | orchestrator | 2025-04-05 12:21:33 | INFO  | Please wait and do not abort execution.
2025-04-05 12:21:33.388749 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-05 12:21:33.389701 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-05 12:21:33.390691 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-05 12:21:33.391371 | orchestrator |
2025-04-05 12:21:33.392932 | orchestrator |
2025-04-05 12:21:33.393743 | orchestrator |
2025-04-05 12:21:33.394423 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:21:33.395435 | orchestrator | Saturday 05 April 2025 12:21:33 +0000 (0:00:00.429) 0:01:09.585 ********
2025-04-05 12:21:33.395962 | orchestrator | ===============================================================================
2025-04-05 12:21:33.396829 | orchestrator | Create block VGs -------------------------------------------------------- 5.59s
2025-04-05 12:21:33.397413 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s
2025-04-05 12:21:33.398164 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s
2025-04-05 12:21:33.398845 | orchestrator | Print LVM report data --------------------------------------------------- 1.72s
2025-04-05 12:21:33.399545 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s
2025-04-05 12:21:33.400072 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s
2025-04-05 12:21:33.400714 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2025-04-05 12:21:33.401266 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s
2025-04-05 12:21:33.401965 | orchestrator | Add known links to the list of available block devices ------------------ 1.48s
2025-04-05 12:21:33.402619 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s
2025-04-05 12:21:33.403103 | orchestrator | Get initial list of available block devices ----------------------------- 1.10s
2025-04-05 12:21:33.403616 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2025-04-05 12:21:33.404159 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.79s
2025-04-05 12:21:33.404999 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.79s
2025-04-05 12:21:33.405451 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.75s
2025-04-05 12:21:33.405996 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.67s
2025-04-05 12:21:33.407266 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.67s
2025-04-05 12:21:33.407557 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.65s
2025-04-05 12:21:33.408098 | orchestrator | Print 'Create DB+WAL VGs' ----------------------------------------------- 0.63s
2025-04-05 12:21:33.408427 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-04-05 12:21:35.409895 | orchestrator | 2025-04-05 12:21:35 | INFO  | Task a27e5017-cb56-45d6-9983-f9702df546d2 (facts) was prepared for execution.
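For context on how the report above is assembled: the play's "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" steps can be pictured as LVM JSON queries whose results are merged into the lvm_report structure printed for each node. The tasks below are only an illustrative sketch of that pattern, assuming LVM2's --reportformat json output; they are not the actual task code of the testbed's Ceph LVM configuration play, and any filtering of non-Ceph volumes is left out.

    - name: Get list of Ceph LVs with associated VGs (sketch)
      ansible.builtin.command: lvs --reportformat json --options lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs (sketch)
      ansible.builtin.command: pvs --reportformat json --options pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
      ansible.builtin.set_fact:
        lvm_report:
          # lvs/pvs wrap their JSON in a top-level "report" list
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

    - name: Print LVM report data (sketch)
      ansible.builtin.debug:
        var: lvm_report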
2025-04-05 12:21:39.192000 | orchestrator | 2025-04-05 12:21:35 | INFO  | It takes a moment until task a27e5017-cb56-45d6-9983-f9702df546d2 (facts) has been started and output is visible here.
2025-04-05 12:21:39.192126 | orchestrator |
2025-04-05 12:21:39.194112 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-04-05 12:21:39.194142 | orchestrator |
2025-04-05 12:21:39.194164 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-05 12:21:39.195137 | orchestrator | Saturday 05 April 2025 12:21:39 +0000 (0:00:00.231) 0:00:00.231 ********
2025-04-05 12:21:40.210549 | orchestrator | ok: [testbed-manager]
2025-04-05 12:21:40.213756 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:21:40.215005 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:21:40.215043 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:21:40.216478 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:21:40.217207 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:21:40.219018 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:21:40.358405 | orchestrator |
2025-04-05 12:21:40.358438 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-05 12:21:40.358458 | orchestrator | Saturday 05 April 2025 12:21:40 +0000 (0:00:01.020) 0:00:01.251 ********
2025-04-05 12:21:40.358479 | orchestrator | skipping: [testbed-manager]
2025-04-05 12:21:40.427702 | orchestrator | skipping: [testbed-node-0]
2025-04-05 12:21:40.498506 | orchestrator | skipping: [testbed-node-1]
2025-04-05 12:21:40.565319 | orchestrator | skipping: [testbed-node-2]
2025-04-05 12:21:40.631725 | orchestrator | skipping: [testbed-node-3]
2025-04-05 12:21:41.241127 | orchestrator | skipping: [testbed-node-4]
2025-04-05 12:21:41.243981 | orchestrator | skipping: [testbed-node-5]
2025-04-05 12:21:41.244673 | orchestrator |
2025-04-05 12:21:41.244704 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-05 12:21:41.245559 | orchestrator |
2025-04-05 12:21:41.245862 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-05 12:21:41.246832 | orchestrator | Saturday 05 April 2025 12:21:41 +0000 (0:00:01.034) 0:00:02.285 ********
2025-04-05 12:21:46.596386 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:21:46.597293 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:21:46.599843 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:21:46.601036 | orchestrator | ok: [testbed-manager]
2025-04-05 12:21:46.601065 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:21:46.601084 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:21:46.601923 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:21:46.602675 | orchestrator |
2025-04-05 12:21:46.603680 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-04-05 12:21:46.604494 | orchestrator |
2025-04-05 12:21:46.605199 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-04-05 12:21:46.605881 | orchestrator | Saturday 05 April 2025 12:21:46 +0000 (0:00:05.355) 0:00:07.640 ********
2025-04-05 12:21:46.736488 | orchestrator | skipping: [testbed-manager]
2025-04-05 12:21:46.814756 | orchestrator | skipping: [testbed-node-0]
2025-04-05 12:21:46.882998 | orchestrator | skipping: [testbed-node-1]
2025-04-05 12:21:46.950084 | orchestrator | skipping: [testbed-node-2]
2025-04-05 12:21:47.016682 | orchestrator | skipping: [testbed-node-3]
2025-04-05 12:21:47.057463 | orchestrator | skipping: [testbed-node-4]
2025-04-05 12:21:47.058564 | orchestrator | skipping: [testbed-node-5]
2025-04-05 12:21:47.059534 | orchestrator |
2025-04-05 12:21:47.060387 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:21:47.060621 | orchestrator | 2025-04-05 12:21:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-05 12:21:47.060729 | orchestrator | 2025-04-05 12:21:47 | INFO  | Please wait and do not abort execution.
2025-04-05 12:21:47.061707 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.062572 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.063205 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.063700 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.064200 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.064754 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.065215 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-05 12:21:47.065686 | orchestrator |
2025-04-05 12:21:47.066189 | orchestrator |
2025-04-05 12:21:47.066676 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:21:47.067149 | orchestrator | Saturday 05 April 2025 12:21:47 +0000 (0:00:00.461) 0:00:08.102 ********
2025-04-05 12:21:47.067750 | orchestrator | ===============================================================================
2025-04-05 12:21:47.068097 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s
2025-04-05 12:21:47.068556 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s
2025-04-05 12:21:47.068987 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2025-04-05 12:21:47.069347 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2025-04-05 12:21:47.417678 | orchestrator |
2025-04-05 12:21:47.420676 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Apr 5 12:21:47 UTC 2025
2025-04-05 12:21:48.783631 | orchestrator |
2025-04-05 12:21:48.783754 | orchestrator | 2025-04-05 12:21:48 | INFO  | Collection nutshell is prepared for execution
2025-04-05 12:21:48.785583 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [0] - dotfiles
2025-04-05 12:21:48.785624 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [0] - homer
2025-04-05 12:21:48.787029 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [0] - netdata
2025-04-05 12:21:48.787064 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [0] - openstackclient
2025-04-05 12:21:48.787080 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [0] - phpmyadmin
2025-04-05 12:21:48.787130 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [0] - common
2025-04-05 12:21:48.787154 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [1] -- loadbalancer
2025-04-05 12:21:48.787270 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [2] --- opensearch
2025-04-05 12:21:48.787306 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [2] --- mariadb-ng
2025-04-05 12:21:48.787339 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [3] ---- horizon
2025-04-05 12:21:48.787433 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [3] ---- keystone
2025-04-05 12:21:48.787462 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [4] ----- neutron
2025-04-05 12:21:48.787493 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ wait-for-nova
2025-04-05 12:21:48.787625 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [5] ------ octavia
2025-04-05 12:21:48.787663 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- barbican
2025-04-05 12:21:48.788061 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- designate
2025-04-05 12:21:48.788421 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- ironic
2025-04-05 12:21:48.788451 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- placement
2025-04-05 12:21:48.788495 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- magnum
2025-04-05 12:21:48.788516 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [1] -- openvswitch
2025-04-05 12:21:48.788575 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [2] --- ovn
2025-04-05 12:21:48.788595 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [1] -- memcached
2025-04-05 12:21:48.788681 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [1] -- redis
2025-04-05 12:21:48.788702 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [1] -- rabbitmq-ng
2025-04-05 12:21:48.790272 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [0] - kubernetes
2025-04-05 12:21:48.790314 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [1] -- kubeconfig
2025-04-05 12:21:48.790424 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [1] -- copy-kubeconfig
2025-04-05 12:21:48.790462 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [0] - ceph
2025-04-05 12:21:48.791856 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [1] -- ceph-pools
2025-04-05 12:21:48.791954 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [2] --- copy-ceph-keys
2025-04-05 12:21:48.791977 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [3] ---- cephclient
2025-04-05 12:21:48.792108 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-04-05 12:21:48.792142 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [4] ----- wait-for-keystone
2025-04-05 12:21:48.792157 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ kolla-ceph-rgw
2025-04-05 12:21:48.792175 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ glance
2025-04-05 12:21:48.792247 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ cinder
2025-04-05 12:21:48.792269 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ nova
2025-04-05 12:21:48.792425 | orchestrator | 2025-04-05 12:21:48 | INFO  | A [4] ----- prometheus
2025-04-05 12:21:48.944915 | orchestrator | 2025-04-05 12:21:48 | INFO  | D [5] ------ grafana
2025-04-05 12:21:48.945016 | orchestrator | 2025-04-05 12:21:48 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-04-05 12:21:51.038166 | orchestrator | 2025-04-05 12:21:48 | INFO  | Tasks are running in the background
2025-04-05 12:21:51.038325 | orchestrator | 2025-04-05 12:21:51 | INFO  | No task IDs specified, wait for all currently running tasks
2025-04-05 12:21:53.125496 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED
2025-04-05 12:21:53.125654 | orchestrator | 2025-04-05 12:21:53 |
INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:21:53.125754 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:21:53.126327 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:21:53.127845 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:21:53.128230 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:21:53.128264 | orchestrator | 2025-04-05 12:21:53 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:21:56.168444 | orchestrator | 2025-04-05 12:21:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:21:56.168567 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:21:56.168640 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:21:56.168868 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:21:56.169322 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:21:56.169837 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:21:56.174681 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:21:56.177264 | orchestrator | 2025-04-05 12:21:56 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:21:56.179463 | orchestrator | 2025-04-05 12:21:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:21:59.207292 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:21:59.207494 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:21:59.209937 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:21:59.210265 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:21:59.210738 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:21:59.211257 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:21:59.214646 | orchestrator | 2025-04-05 12:21:59 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:02.264974 | orchestrator | 2025-04-05 12:21:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:02.265513 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:02.265620 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:22:02.265647 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:02.266337 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 
12:22:02.266481 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:02.266505 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:02.270773 | orchestrator | 2025-04-05 12:22:02 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:05.305176 | orchestrator | 2025-04-05 12:22:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:05.305300 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:05.308903 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:22:05.309268 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:05.309985 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:05.310382 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:05.311170 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:05.311629 | orchestrator | 2025-04-05 12:22:05 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:08.373975 | orchestrator | 2025-04-05 12:22:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:08.374163 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:08.375871 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:22:08.375901 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:08.375920 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:08.378099 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:08.378161 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:11.441886 | orchestrator | 2025-04-05 12:22:08 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:11.441969 | orchestrator | 2025-04-05 12:22:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:11.441987 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:11.449225 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state STARTED 2025-04-05 12:22:11.450720 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:11.451860 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:11.453012 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:11.453884 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:11.455062 | orchestrator | 2025-04-05 12:22:11 | INFO  | Task 
01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED
2025-04-05 12:22:14.520694 | orchestrator | 2025-04-05 12:22:11 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:22:14.520876 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED
2025-04-05 12:22:14.522193 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED
2025-04-05 12:22:14.522256 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task aa2ea272-0852-4d40-8374-b09a75e762b3 is in state SUCCESS
2025-04-05 12:22:14.522428 | orchestrator |
2025-04-05 12:22:14.522449 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-04-05 12:22:14.522464 | orchestrator |
2025-04-05 12:22:14.522478 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-04-05 12:22:14.522492 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:00.277) 0:00:00.277 ********
2025-04-05 12:22:14.522506 | orchestrator | changed: [testbed-node-0]
2025-04-05 12:22:14.522521 | orchestrator | changed: [testbed-node-1]
2025-04-05 12:22:14.522536 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:14.522549 | orchestrator | changed: [testbed-node-2]
2025-04-05 12:22:14.522563 | orchestrator | changed: [testbed-node-3]
2025-04-05 12:22:14.522577 | orchestrator | changed: [testbed-node-4]
2025-04-05 12:22:14.522591 | orchestrator | changed: [testbed-node-5]
2025-04-05 12:22:14.522605 | orchestrator |
2025-04-05 12:22:14.522619 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-04-05 12:22:14.522633 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:03.920) 0:00:04.198 ********
2025-04-05 12:22:14.522647 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-04-05 12:22:14.522661 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-04-05 12:22:14.522682 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-04-05 12:22:14.522696 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-04-05 12:22:14.522710 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-04-05 12:22:14.522724 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-04-05 12:22:14.522737 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-04-05 12:22:14.522751 | orchestrator |
2025-04-05 12:22:14.522765 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
*** 2025-04-05 12:22:14.522807 | orchestrator | Saturday 05 April 2025 12:22:05 +0000 (0:00:02.831) 0:00:07.029 ******** 2025-04-05 12:22:14.522826 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:03.938099', 'end': '2025-04-05 12:22:03.941747', 'delta': '0:00:00.003648', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.522872 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:04.055088', 'end': '2025-04-05 12:22:04.063915', 'delta': '0:00:00.008827', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.522888 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:04.210697', 'end': '2025-04-05 12:22:04.219244', 'delta': '0:00:00.008547', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.522930 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:04.565593', 'end': '2025-04-05 12:22:05.573743', 'delta': '0:00:01.008150', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-04-05 12:22:14.522945 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:05.006176', 'end': '2025-04-05 12:22:05.013441', 'delta': '0:00:00.007265', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.522968 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:05.474238', 'end': '2025-04-05 12:22:05.480616', 'delta': '0:00:00.006378', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.522988 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-05 12:22:05.581005', 'end': '2025-04-05 12:22:05.587757', 'delta': '0:00:00.006752', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-05 12:22:14.523002 | orchestrator | 2025-04-05 12:22:14.523016 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
****
2025-04-05 12:22:14.523030 | orchestrator | Saturday 05 April 2025 12:22:08 +0000 (0:00:02.961) 0:00:09.990 ********
2025-04-05 12:22:14.523045 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-04-05 12:22:14.523062 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-04-05 12:22:14.523077 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-04-05 12:22:14.523092 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-04-05 12:22:14.523108 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-04-05 12:22:14.523123 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-04-05 12:22:14.523138 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-04-05 12:22:14.523154 | orchestrator |
2025-04-05 12:22:14.523170 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-04-05 12:22:14.523185 | orchestrator | Saturday 05 April 2025 12:22:10 +0000 (0:00:01.109) 0:00:11.100 ********
2025-04-05 12:22:14.523200 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-04-05 12:22:14.523216 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-04-05 12:22:14.523231 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-04-05 12:22:14.523246 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-04-05 12:22:14.523261 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-04-05 12:22:14.523277 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-04-05 12:22:14.523292 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-04-05 12:22:14.523308 | orchestrator |
2025-04-05 12:22:14.523323 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:22:14.523345 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526545 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526589 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526627 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526642 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526657 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526670 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:14.526684 | orchestrator |
2025-04-05 12:22:14.526698 | orchestrator |
2025-04-05 12:22:14.526712 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:22:14.526727 | orchestrator | Saturday 05 April 2025 12:22:12 +0000 (0:00:02.416) 0:00:13.517 ********
2025-04-05 12:22:14.526741 | orchestrator | ===============================================================================
2025-04-05 12:22:14.526755 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.92s
2025-04-05 12:22:14.526768 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
--- 2.96s 2025-04-05 12:22:14.526807 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.83s 2025-04-05 12:22:14.526821 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.42s 2025-04-05 12:22:14.526835 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.11s 2025-04-05 12:22:14.526856 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:14.526920 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:14.526940 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:14.532008 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:14.533579 | orchestrator | 2025-04-05 12:22:14 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:14.533615 | orchestrator | 2025-04-05 12:22:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:17.590899 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:17.592277 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:17.597018 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:17.597504 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:17.608334 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:17.613483 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:17.613551 | orchestrator | 2025-04-05 12:22:17 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:20.651742 | orchestrator | 2025-04-05 12:22:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:20.651935 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:20.655286 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:20.655357 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:20.655431 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:20.655469 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:20.655881 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:20.655938 | orchestrator | 2025-04-05 12:22:20 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:23.708858 | orchestrator | 2025-04-05 12:22:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:23.708986 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:23.709488 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de 
is in state STARTED 2025-04-05 12:22:23.710931 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:23.712655 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:23.713344 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:23.714742 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:23.719849 | orchestrator | 2025-04-05 12:22:23 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:26.779734 | orchestrator | 2025-04-05 12:22:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:26.779863 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:26.779922 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:26.782283 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:26.783961 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:26.784178 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:26.785719 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:26.787606 | orchestrator | 2025-04-05 12:22:26 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:29.843710 | orchestrator | 2025-04-05 12:22:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:29.843873 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:29.845264 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:29.845857 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state STARTED 2025-04-05 12:22:29.845899 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:29.846598 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:29.847474 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:29.847972 | orchestrator | 2025-04-05 12:22:29 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:29.848099 | orchestrator | 2025-04-05 12:22:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:32.894721 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:32.897246 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:32.897774 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task 8a17d569-dfa5-4c04-b7b6-e1f464ebd698 is in state SUCCESS 2025-04-05 12:22:32.897825 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:32.897846 | orchestrator | 2025-04-05 12:22:32 | INFO 
 | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:32.901628 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:32.903944 | orchestrator | 2025-04-05 12:22:32 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:35.939602 | orchestrator | 2025-04-05 12:22:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:35.939723 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:35.939836 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:35.944527 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:35.945451 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:38.976380 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:38.976493 | orchestrator | 2025-04-05 12:22:35 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:38.976511 | orchestrator | 2025-04-05 12:22:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:38.976543 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:38.981293 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:38.986459 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:38.988020 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:38.989463 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:38.990425 | orchestrator | 2025-04-05 12:22:38 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:42.031312 | orchestrator | 2025-04-05 12:22:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:42.031424 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:42.032484 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:42.033680 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:42.038550 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:42.039628 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED 2025-04-05 12:22:42.039680 | orchestrator | 2025-04-05 12:22:42 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:42.043345 | orchestrator | 2025-04-05 12:22:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:45.085461 | orchestrator | 2025-04-05 12:22:45 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:45.089612 | orchestrator | 2025-04-05 12:22:45 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED 2025-04-05 12:22:45.093403 | orchestrator | 
2025-04-05 12:22:45 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:22:45.093436 | orchestrator | 2025-04-05 12:22:45 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED
2025-04-05 12:22:45.093458 | orchestrator | 2025-04-05 12:22:45 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED
2025-04-05 12:22:48.154516 | orchestrator | 2025-04-05 12:22:45 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED
2025-04-05 12:22:48.154630 | orchestrator | 2025-04-05 12:22:45 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:22:48.154668 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED
2025-04-05 12:22:48.155429 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED
2025-04-05 12:22:48.157127 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:22:48.159697 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED
2025-04-05 12:22:51.206128 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state STARTED
2025-04-05 12:22:51.206261 | orchestrator | 2025-04-05 12:22:48 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED
2025-04-05 12:22:51.206281 | orchestrator | 2025-04-05 12:22:48 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:22:51.206313 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED
2025-04-05 12:22:51.209665 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state STARTED
2025-04-05 12:22:51.211315 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:22:51.211398 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED
2025-04-05 12:22:51.211421 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task 1d7c3468-443d-4dc8-9093-291908f40904 is in state SUCCESS
2025-04-05 12:22:51.214092 | orchestrator | 2025-04-05 12:22:51 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED
2025-04-05 12:22:51.214175 | orchestrator | 2025-04-05 12:22:51 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:22:54.249912 | orchestrator | 2025-04-05 12:22:54 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED
2025-04-05 12:22:54.250157 | orchestrator | 2025-04-05 12:22:54 | INFO  | Task afb14efc-256e-49c3-9c7d-478754b278de is in state SUCCESS
2025-04-05 12:22:54.250861 | orchestrator |
2025-04-05 12:22:54.250894 | orchestrator |
2025-04-05 12:22:54.250902 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-04-05 12:22:54.250910 | orchestrator |
2025-04-05 12:22:54.250917 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-04-05 12:22:54.250926 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:00.243) 0:00:00.243 ********
2025-04-05 12:22:54.250949 | orchestrator | ok: [testbed-manager] => {
2025-04-05 12:22:54.250959 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-04-05 12:22:54.250968 | orchestrator | }
2025-04-05 12:22:54.250975 | orchestrator |
2025-04-05 12:22:54.250982 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-04-05 12:22:54.250990 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:00.367) 0:00:00.611 ********
2025-04-05 12:22:54.250997 | orchestrator | ok: [testbed-manager]
2025-04-05 12:22:54.251005 | orchestrator |
2025-04-05 12:22:54.251012 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-04-05 12:22:54.251019 | orchestrator | Saturday 05 April 2025 12:22:00 +0000 (0:00:01.317) 0:00:01.928 ********
2025-04-05 12:22:54.251026 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-04-05 12:22:54.251033 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-04-05 12:22:54.251040 | orchestrator |
2025-04-05 12:22:54.251047 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-04-05 12:22:54.251054 | orchestrator | Saturday 05 April 2025 12:22:01 +0000 (0:00:00.984) 0:00:02.913 ********
2025-04-05 12:22:54.251061 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251068 | orchestrator |
2025-04-05 12:22:54.251075 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-04-05 12:22:54.251082 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:01.845) 0:00:04.758 ********
2025-04-05 12:22:54.251090 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251097 | orchestrator |
2025-04-05 12:22:54.251104 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-04-05 12:22:54.251111 | orchestrator | Saturday 05 April 2025 12:22:05 +0000 (0:00:01.628) 0:00:06.386 ********
2025-04-05 12:22:54.251118 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
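The FAILED - RETRYING message above is expected on a fresh node: the "Manage homer service" task apparently brings up the docker-compose based homer service and keeps retrying, up to the ten retries shown, until the containers are running, which can take a moment on first start. As a rough, hypothetical illustration of that retry pattern in Ansible terms (not the actual osism.services.homer task, which may use different modules), assuming the compose file was rendered to /opt/homer as the preceding tasks suggest:

    - name: Manage homer service (illustrative retry sketch)
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/homer
      register: result
      # retry the same pattern seen in the log: up to 10 attempts
      until: result.rc == 0
      retries: 10
      delay: 5
      changed_when: false

Once the command succeeds the task reports ok, as seen on the next line, and the role's handler then restarts the service so that the freshly copied config.yml and docker-compose.yml take effect.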
2025-04-05 12:22:54.251125 | orchestrator | ok: [testbed-manager]
2025-04-05 12:22:54.251132 | orchestrator |
2025-04-05 12:22:54.251143 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-04-05 12:22:54.251150 | orchestrator | Saturday 05 April 2025 12:22:28 +0000 (0:00:23.511) 0:00:29.898 ********
2025-04-05 12:22:54.251157 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251164 | orchestrator |
2025-04-05 12:22:54.251171 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:22:54.251177 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:54.251186 | orchestrator |
2025-04-05 12:22:54.251193 | orchestrator |
2025-04-05 12:22:54.251199 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:22:54.251206 | orchestrator | Saturday 05 April 2025 12:22:30 +0000 (0:00:01.909) 0:00:31.808 ********
2025-04-05 12:22:54.251213 | orchestrator | ===============================================================================
2025-04-05 12:22:54.251220 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.51s
2025-04-05 12:22:54.251226 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.91s
2025-04-05 12:22:54.251234 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.85s
2025-04-05 12:22:54.251240 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.63s
2025-04-05 12:22:54.251247 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.32s
2025-04-05 12:22:54.251254 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.98s
2025-04-05 12:22:54.251261 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.37s
2025-04-05 12:22:54.251268 | orchestrator |
2025-04-05 12:22:54.251274 | orchestrator |
2025-04-05 12:22:54.251281 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-04-05 12:22:54.251293 | orchestrator |
2025-04-05 12:22:54.251300 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-04-05 12:22:54.251307 | orchestrator | Saturday 05 April 2025 12:21:58 +0000 (0:00:00.354) 0:00:00.354 ********
2025-04-05 12:22:54.251314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-04-05 12:22:54.251322 | orchestrator |
2025-04-05 12:22:54.251329 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-04-05 12:22:54.251336 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:00.361) 0:00:00.716 ********
2025-04-05 12:22:54.251343 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-04-05 12:22:54.251350 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-04-05 12:22:54.251357 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-04-05 12:22:54.251364 | orchestrator |
2025-04-05 12:22:54.251371 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-04-05 12:22:54.251378 | orchestrator | Saturday 05 April 2025 12:22:01 +0000 (0:00:01.404) 0:00:02.642 ********
2025-04-05 12:22:54.251385 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251392 | orchestrator |
2025-04-05 12:22:54.251398 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-04-05 12:22:54.251405 | orchestrator | Saturday 05 April 2025 12:22:02 +0000 (0:00:01.404) 0:00:04.047 ********
2025-04-05 12:22:54.251419 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-04-05 12:22:54.251427 | orchestrator | ok: [testbed-manager]
2025-04-05 12:22:54.251434 | orchestrator |
2025-04-05 12:22:54.251441 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-04-05 12:22:54.251448 | orchestrator | Saturday 05 April 2025 12:22:42 +0000 (0:00:40.349) 0:00:44.396 ********
2025-04-05 12:22:54.251455 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251462 | orchestrator |
2025-04-05 12:22:54.251469 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-04-05 12:22:54.251476 | orchestrator | Saturday 05 April 2025 12:22:43 +0000 (0:00:00.905) 0:00:45.302 ********
2025-04-05 12:22:54.251483 | orchestrator | ok: [testbed-manager]
2025-04-05 12:22:54.251490 | orchestrator |
2025-04-05 12:22:54.251497 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-04-05 12:22:54.251505 | orchestrator | Saturday 05 April 2025 12:22:44 +0000 (0:00:00.989) 0:00:46.291 ********
2025-04-05 12:22:54.251512 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251519 | orchestrator |
2025-04-05 12:22:54.251526 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-04-05 12:22:54.251533 | orchestrator | Saturday 05 April 2025 12:22:47 +0000 (0:00:02.458) 0:00:48.750 ********
2025-04-05 12:22:54.251540 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251548 | orchestrator |
2025-04-05 12:22:54.251555 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-04-05 12:22:54.251565 | orchestrator | Saturday 05 April 2025 12:22:48 +0000 (0:00:00.965) 0:00:49.715 ********
2025-04-05 12:22:54.251572 | orchestrator | changed: [testbed-manager]
2025-04-05 12:22:54.251579 | orchestrator |
2025-04-05 12:22:54.251586 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-04-05 12:22:54.251593 | orchestrator | Saturday 05 April 2025 12:22:48 +0000 (0:00:00.643) 0:00:50.359 ********
2025-04-05 12:22:54.251601 | orchestrator | ok: [testbed-manager]
2025-04-05 12:22:54.251608 | orchestrator |
2025-04-05 12:22:54.251615 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:22:54.251622 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:22:54.251630 | orchestrator |
2025-04-05 12:22:54.251637 | orchestrator |
2025-04-05 12:22:54.251649 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:22:54.251656 | orchestrator | Saturday 05 April 2025 12:22:49 +0000 (0:00:00.309) 0:00:50.669 ********
2025-04-05 12:22:54.251663 | orchestrator | ===============================================================================
2025-04-05 12:22:54.251669 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.35s
2025-04-05 12:22:54.251676 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.46s
2025-04-05 12:22:54.251684 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.93s
2025-04-05 12:22:54.251691 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.40s
2025-04-05 12:22:54.251698 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.99s
2025-04-05 12:22:54.251705 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.97s
2025-04-05 12:22:54.251712 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.91s
2025-04-05 12:22:54.251719 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.64s
2025-04-05 12:22:54.251727 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.36s
2025-04-05 12:22:54.251733 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.31s
2025-04-05 12:22:54.251741 | orchestrator |
2025-04-05 12:22:54.251748 | orchestrator |
2025-04-05 12:22:54.251755 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-05 12:22:54.251762 | orchestrator |
2025-04-05 12:22:54.251769 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-05 12:22:54.251795 | orchestrator | Saturday 05 April 2025 12:21:58 +0000 (0:00:00.241) 0:00:00.241 ********
2025-04-05 12:22:54.251803 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-04-05 12:22:54.251811 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-04-05 12:22:54.251818 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-04-05 12:22:54.251825 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-04-05 12:22:54.251832 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-04-05 12:22:54.251840 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-04-05 12:22:54.251847 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-04-05 12:22:54.251853 | orchestrator |
2025-04-05 12:22:54.251860 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-04-05 12:22:54.251867 | orchestrator |
2025-04-05 12:22:54.251874 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-04-05 12:22:54.251881 | orchestrator | Saturday 05 April 2025 12:22:01 +0000 (0:00:02.383) 0:00:02.625 ********
2025-04-05 12:22:54.251895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-05 12:22:54.251904 | orchestrator |
2025-04-05 12:22:54.251911 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-04-05 12:22:54.251917 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:01.842) 0:00:04.468 ********
2025-04-05
12:22:54.251924 | orchestrator | ok: [testbed-manager] 2025-04-05 12:22:54.251931 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:22:54.251939 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:22:54.251946 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:22:54.251953 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:22:54.251965 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:22:54.251972 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:22:54.251979 | orchestrator | 2025-04-05 12:22:54.251987 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-05 12:22:54.251995 | orchestrator | Saturday 05 April 2025 12:22:06 +0000 (0:00:03.086) 0:00:07.554 ******** 2025-04-05 12:22:54.252012 | orchestrator | ok: [testbed-manager] 2025-04-05 12:22:54.252020 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:22:54.252028 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:22:54.252035 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:22:54.252042 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:22:54.252049 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:22:54.252056 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:22:54.252066 | orchestrator | 2025-04-05 12:22:54.252074 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-05 12:22:54.252080 | orchestrator | Saturday 05 April 2025 12:22:09 +0000 (0:00:03.621) 0:00:11.176 ******** 2025-04-05 12:22:54.252087 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:22:54.252094 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:22:54.252101 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252108 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:22:54.252115 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:22:54.252121 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:22:54.252128 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:22:54.252135 | orchestrator | 2025-04-05 12:22:54.252142 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-05 12:22:54.252149 | orchestrator | Saturday 05 April 2025 12:22:12 +0000 (0:00:02.464) 0:00:13.641 ******** 2025-04-05 12:22:54.252156 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252163 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:22:54.252170 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:22:54.252176 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:22:54.252183 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:22:54.252190 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:22:54.252197 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:22:54.252204 | orchestrator | 2025-04-05 12:22:54.252213 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-05 12:22:54.252220 | orchestrator | Saturday 05 April 2025 12:22:20 +0000 (0:00:08.296) 0:00:21.938 ******** 2025-04-05 12:22:54.252227 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:22:54.252234 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:22:54.252241 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:22:54.252247 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:22:54.252254 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:22:54.252261 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252268 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:22:54.252275 | 
orchestrator | 2025-04-05 12:22:54.252282 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-05 12:22:54.252289 | orchestrator | Saturday 05 April 2025 12:22:33 +0000 (0:00:12.847) 0:00:34.785 ******** 2025-04-05 12:22:54.252296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:22:54.252306 | orchestrator | 2025-04-05 12:22:54.252313 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-05 12:22:54.252320 | orchestrator | Saturday 05 April 2025 12:22:34 +0000 (0:00:01.256) 0:00:36.042 ******** 2025-04-05 12:22:54.252327 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-05 12:22:54.252334 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-05 12:22:54.252342 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-05 12:22:54.252349 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-05 12:22:54.252356 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-05 12:22:54.252363 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-05 12:22:54.252370 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-05 12:22:54.252377 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-05 12:22:54.252384 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-05 12:22:54.252399 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-05 12:22:54.252405 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-05 12:22:54.252412 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-05 12:22:54.252419 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-05 12:22:54.252426 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-05 12:22:54.252433 | orchestrator | 2025-04-05 12:22:54.252440 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-05 12:22:54.252447 | orchestrator | Saturday 05 April 2025 12:22:39 +0000 (0:00:04.412) 0:00:40.455 ******** 2025-04-05 12:22:54.252454 | orchestrator | ok: [testbed-manager] 2025-04-05 12:22:54.252461 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:22:54.252468 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:22:54.252475 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:22:54.252482 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:22:54.252488 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:22:54.252495 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:22:54.252502 | orchestrator | 2025-04-05 12:22:54.252509 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-05 12:22:54.252516 | orchestrator | Saturday 05 April 2025 12:22:40 +0000 (0:00:01.025) 0:00:41.480 ******** 2025-04-05 12:22:54.252522 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:22:54.252529 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:22:54.252536 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252543 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:22:54.252550 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:22:54.252557 | orchestrator | 
changed: [testbed-node-4] 2025-04-05 12:22:54.252563 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:22:54.252570 | orchestrator | 2025-04-05 12:22:54.252577 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-05 12:22:54.252589 | orchestrator | Saturday 05 April 2025 12:22:41 +0000 (0:00:01.738) 0:00:43.219 ******** 2025-04-05 12:22:54.252597 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:22:54.252603 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:22:54.252610 | orchestrator | ok: [testbed-manager] 2025-04-05 12:22:54.252617 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:22:54.252624 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:22:54.252631 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:22:54.252638 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:22:54.252645 | orchestrator | 2025-04-05 12:22:54.252652 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-05 12:22:54.252659 | orchestrator | Saturday 05 April 2025 12:22:43 +0000 (0:00:01.718) 0:00:44.938 ******** 2025-04-05 12:22:54.252666 | orchestrator | ok: [testbed-manager] 2025-04-05 12:22:54.252673 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:22:54.252680 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:22:54.252687 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:22:54.252694 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:22:54.252701 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:22:54.252708 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:22:54.252715 | orchestrator | 2025-04-05 12:22:54.252722 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-05 12:22:54.252729 | orchestrator | Saturday 05 April 2025 12:22:45 +0000 (0:00:01.921) 0:00:46.859 ******** 2025-04-05 12:22:54.252736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-05 12:22:54.252746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:22:54.252753 | orchestrator | 2025-04-05 12:22:54.252760 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-05 12:22:54.252767 | orchestrator | Saturday 05 April 2025 12:22:47 +0000 (0:00:02.189) 0:00:49.048 ******** 2025-04-05 12:22:54.252832 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252841 | orchestrator | 2025-04-05 12:22:54.252848 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-05 12:22:54.252855 | orchestrator | Saturday 05 April 2025 12:22:49 +0000 (0:00:02.211) 0:00:51.260 ******** 2025-04-05 12:22:54.252862 | orchestrator | changed: [testbed-manager] 2025-04-05 12:22:54.252869 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:22:54.252881 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:22:54.252889 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:22:54.252896 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:22:54.252903 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:22:54.252910 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:22:54.252949 | orchestrator | 2025-04-05 12:22:54.252956 | orchestrator | PLAY RECAP 
********************************************************************* 2025-04-05 12:22:54.252963 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.252971 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.252978 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.252988 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.252995 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.253002 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.253009 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:22:54.253016 | orchestrator | 2025-04-05 12:22:54.253022 | orchestrator | 2025-04-05 12:22:54.253030 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:22:54.253037 | orchestrator | Saturday 05 April 2025 12:22:52 +0000 (0:00:02.399) 0:00:53.659 ******** 2025-04-05 12:22:54.253043 | orchestrator | =============================================================================== 2025-04-05 12:22:54.253050 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 12.85s 2025-04-05 12:22:54.253057 | orchestrator | osism.services.netdata : Add repository --------------------------------- 8.30s 2025-04-05 12:22:54.253064 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.41s 2025-04-05 12:22:54.253071 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.62s 2025-04-05 12:22:54.253079 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.09s 2025-04-05 12:22:54.253087 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.46s 2025-04-05 12:22:54.253094 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.40s 2025-04-05 12:22:54.253102 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.38s 2025-04-05 12:22:54.253109 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.21s 2025-04-05 12:22:54.253118 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.19s 2025-04-05 12:22:54.253126 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.92s 2025-04-05 12:22:54.253138 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.84s 2025-04-05 12:22:54.253166 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.74s 2025-04-05 12:22:54.253173 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.72s 2025-04-05 12:22:54.253186 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.26s 2025-04-05 12:22:54.253193 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.03s 2025-04-05 12:22:54.253203 | orchestrator | 2025-04-05 12:22:54 | INFO  | Task 
8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:54.253288 | orchestrator | 2025-04-05 12:22:54 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:54.254193 | orchestrator | 2025-04-05 12:22:54 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:54.254346 | orchestrator | 2025-04-05 12:22:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:22:57.280414 | orchestrator | 2025-04-05 12:22:57 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:22:57.280912 | orchestrator | 2025-04-05 12:22:57 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:22:57.280958 | orchestrator | 2025-04-05 12:22:57 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:22:57.281746 | orchestrator | 2025-04-05 12:22:57 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:22:57.281870 | orchestrator | 2025-04-05 12:22:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:00.322306 | orchestrator | 2025-04-05 12:23:00 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:23:00.323029 | orchestrator | 2025-04-05 12:23:00 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:00.324170 | orchestrator | 2025-04-05 12:23:00 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:00.325342 | orchestrator | 2025-04-05 12:23:00 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:03.366908 | orchestrator | 2025-04-05 12:23:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:03.367038 | orchestrator | 2025-04-05 12:23:03 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:23:03.368048 | orchestrator | 2025-04-05 12:23:03 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:03.373384 | orchestrator | 2025-04-05 12:23:03 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:03.374136 | orchestrator | 2025-04-05 12:23:03 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:06.423066 | orchestrator | 2025-04-05 12:23:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:06.423210 | orchestrator | 2025-04-05 12:23:06 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:23:06.430151 | orchestrator | 2025-04-05 12:23:06 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:06.430192 | orchestrator | 2025-04-05 12:23:06 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:06.430578 | orchestrator | 2025-04-05 12:23:06 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:09.481095 | orchestrator | 2025-04-05 12:23:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:09.481218 | orchestrator | 2025-04-05 12:23:09 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:23:09.482564 | orchestrator | 2025-04-05 12:23:09 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:09.483213 | orchestrator | 2025-04-05 12:23:09 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:09.483269 | orchestrator | 2025-04-05 12:23:09 | INFO  | Task 
01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:12.517609 | orchestrator | 2025-04-05 12:23:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:12.517768 | orchestrator | 2025-04-05 12:23:12 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state STARTED 2025-04-05 12:23:12.517886 | orchestrator | 2025-04-05 12:23:12 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:12.517911 | orchestrator | 2025-04-05 12:23:12 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:12.518681 | orchestrator | 2025-04-05 12:23:12 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:12.518898 | orchestrator | 2025-04-05 12:23:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:15.556448 | orchestrator | 2025-04-05 12:23:15 | INFO  | Task dc1259ba-0f6d-422a-9867-c5a3559a1391 is in state SUCCESS 2025-04-05 12:23:15.556622 | orchestrator | 2025-04-05 12:23:15 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:15.557250 | orchestrator | 2025-04-05 12:23:15 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:15.557815 | orchestrator | 2025-04-05 12:23:15 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:15.557922 | orchestrator | 2025-04-05 12:23:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:18.599699 | orchestrator | 2025-04-05 12:23:18 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:18.601106 | orchestrator | 2025-04-05 12:23:18 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:18.603440 | orchestrator | 2025-04-05 12:23:18 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:21.631462 | orchestrator | 2025-04-05 12:23:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:21.631614 | orchestrator | 2025-04-05 12:23:21 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:21.631728 | orchestrator | 2025-04-05 12:23:21 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:21.634237 | orchestrator | 2025-04-05 12:23:21 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:24.667215 | orchestrator | 2025-04-05 12:23:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:24.667344 | orchestrator | 2025-04-05 12:23:24 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:24.668456 | orchestrator | 2025-04-05 12:23:24 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:24.668492 | orchestrator | 2025-04-05 12:23:24 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:27.697762 | orchestrator | 2025-04-05 12:23:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:27.697934 | orchestrator | 2025-04-05 12:23:27 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:27.698233 | orchestrator | 2025-04-05 12:23:27 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:27.699098 | orchestrator | 2025-04-05 12:23:27 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:27.699212 | orchestrator | 2025-04-05 12:23:27 | INFO  | Wait 1 second(s) until the next 
check 2025-04-05 12:23:30.741243 | orchestrator | 2025-04-05 12:23:30 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:30.741338 | orchestrator | 2025-04-05 12:23:30 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:30.741772 | orchestrator | 2025-04-05 12:23:30 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:30.741845 | orchestrator | 2025-04-05 12:23:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:33.773600 | orchestrator | 2025-04-05 12:23:33 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:33.775676 | orchestrator | 2025-04-05 12:23:33 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:33.776961 | orchestrator | 2025-04-05 12:23:33 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:36.812015 | orchestrator | 2025-04-05 12:23:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:36.812148 | orchestrator | 2025-04-05 12:23:36 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:36.813040 | orchestrator | 2025-04-05 12:23:36 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:36.813557 | orchestrator | 2025-04-05 12:23:36 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:39.856081 | orchestrator | 2025-04-05 12:23:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:39.856208 | orchestrator | 2025-04-05 12:23:39 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:39.857181 | orchestrator | 2025-04-05 12:23:39 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:39.858864 | orchestrator | 2025-04-05 12:23:39 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:39.858936 | orchestrator | 2025-04-05 12:23:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:42.895229 | orchestrator | 2025-04-05 12:23:42 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:42.897569 | orchestrator | 2025-04-05 12:23:42 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:42.898711 | orchestrator | 2025-04-05 12:23:42 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:45.949158 | orchestrator | 2025-04-05 12:23:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:45.949307 | orchestrator | 2025-04-05 12:23:45 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:45.956578 | orchestrator | 2025-04-05 12:23:45 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:49.011094 | orchestrator | 2025-04-05 12:23:45 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:49.011219 | orchestrator | 2025-04-05 12:23:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:49.011255 | orchestrator | 2025-04-05 12:23:49 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:49.011510 | orchestrator | 2025-04-05 12:23:49 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:49.011539 | orchestrator | 2025-04-05 12:23:49 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 
12:23:49.011560 | orchestrator | 2025-04-05 12:23:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:52.056864 | orchestrator | 2025-04-05 12:23:52 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:52.057096 | orchestrator | 2025-04-05 12:23:52 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:52.060045 | orchestrator | 2025-04-05 12:23:52 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state STARTED 2025-04-05 12:23:55.108971 | orchestrator | 2025-04-05 12:23:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:55.109103 | orchestrator | 2025-04-05 12:23:55 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:55.110525 | orchestrator | 2025-04-05 12:23:55 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:55.114707 | orchestrator | 2025-04-05 12:23:55 | INFO  | Task 01eb29d8-4042-4baa-bf3e-554fad8a3038 is in state SUCCESS 2025-04-05 12:23:55.115080 | orchestrator | 2025-04-05 12:23:55.116613 | orchestrator | 2025-04-05 12:23:55.116662 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-04-05 12:23:55.116679 | orchestrator | 2025-04-05 12:23:55.116694 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-04-05 12:23:55.116710 | orchestrator | Saturday 05 April 2025 12:22:18 +0000 (0:00:00.215) 0:00:00.215 ******** 2025-04-05 12:23:55.116726 | orchestrator | ok: [testbed-manager] 2025-04-05 12:23:55.116743 | orchestrator | 2025-04-05 12:23:55.116758 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-04-05 12:23:55.116795 | orchestrator | Saturday 05 April 2025 12:22:19 +0000 (0:00:00.999) 0:00:01.215 ******** 2025-04-05 12:23:55.116814 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-04-05 12:23:55.116829 | orchestrator | 2025-04-05 12:23:55.116845 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-04-05 12:23:55.116860 | orchestrator | Saturday 05 April 2025 12:22:20 +0000 (0:00:00.558) 0:00:01.773 ******** 2025-04-05 12:23:55.116876 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.116892 | orchestrator | 2025-04-05 12:23:55.116907 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-04-05 12:23:55.116923 | orchestrator | Saturday 05 April 2025 12:22:21 +0000 (0:00:01.643) 0:00:03.417 ******** 2025-04-05 12:23:55.116938 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
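Note: the "FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left)." record above is Ansible's standard output for a task declared with until/retries/delay; each failed attempt is reported and retried until the condition holds, and the task settles to ok, as the next record shows. A minimal sketch of that retry pattern, assuming (hypothetically) that the role brings the service up with docker compose and simply retries until the command succeeds — the actual osism.services.phpmyadmin task may check readiness differently:

- name: Manage phpmyadmin service        # hypothetical sketch of the retry pattern, not the role's real task
  ansible.builtin.command:
    cmd: docker compose --project-directory /opt/phpmyadmin up -d   # /opt/phpmyadmin is the directory created earlier in this play
  register: result
  until: result is succeeded             # retry while the command still fails
  retries: 10                            # matches the retries-left counter seen in the log above
  delay: 5                               # seconds between attempts (assumed)
  changed_when: false

Under this pattern, the 46.89s recorded for "Manage phpmyadmin service" in the recap below is presumably mostly wait time (retry delays plus the initial image pull), which is expected on a first deployment.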
2025-04-05 12:23:55.116954 | orchestrator | ok: [testbed-manager] 2025-04-05 12:23:55.116969 | orchestrator | 2025-04-05 12:23:55.116985 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-04-05 12:23:55.117000 | orchestrator | Saturday 05 April 2025 12:23:08 +0000 (0:00:46.894) 0:00:50.311 ******** 2025-04-05 12:23:55.117015 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.117031 | orchestrator | 2025-04-05 12:23:55.117046 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:23:55.117061 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:23:55.117078 | orchestrator | 2025-04-05 12:23:55.117094 | orchestrator | 2025-04-05 12:23:55.117109 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:23:55.117124 | orchestrator | Saturday 05 April 2025 12:23:12 +0000 (0:00:03.287) 0:00:53.599 ******** 2025-04-05 12:23:55.117140 | orchestrator | =============================================================================== 2025-04-05 12:23:55.117155 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 46.89s 2025-04-05 12:23:55.117170 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.29s 2025-04-05 12:23:55.117186 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.64s 2025-04-05 12:23:55.117203 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.00s 2025-04-05 12:23:55.117220 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s 2025-04-05 12:23:55.117256 | orchestrator | 2025-04-05 12:23:55.117284 | orchestrator | 2025-04-05 12:23:55.117302 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-04-05 12:23:55.117319 | orchestrator | 2025-04-05 12:23:55.117335 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-05 12:23:55.117352 | orchestrator | Saturday 05 April 2025 12:21:52 +0000 (0:00:00.212) 0:00:00.212 ******** 2025-04-05 12:23:55.117369 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:23:55.117386 | orchestrator | 2025-04-05 12:23:55.117403 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-04-05 12:23:55.117420 | orchestrator | Saturday 05 April 2025 12:21:53 +0000 (0:00:01.335) 0:00:01.548 ******** 2025-04-05 12:23:55.117442 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117460 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117477 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117494 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117511 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117527 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117544 | orchestrator | changed: 
[testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117559 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-05 12:23:55.117574 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117589 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117604 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117620 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117637 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117652 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117668 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117683 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117698 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117713 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-05 12:23:55.117729 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117744 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117759 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-05 12:23:55.117788 | orchestrator | 2025-04-05 12:23:55.117804 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-05 12:23:55.117819 | orchestrator | Saturday 05 April 2025 12:21:57 +0000 (0:00:04.228) 0:00:05.777 ******** 2025-04-05 12:23:55.117835 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:23:55.117856 | orchestrator | 2025-04-05 12:23:55.117872 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-04-05 12:23:55.117887 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:01.387) 0:00:07.165 ******** 2025-04-05 12:23:55.117906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.117933 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.117963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.117979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.117995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.118010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.118095 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.118221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118238 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.118369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
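Note: each item in the task above is one entry from the common role's service map: the key is the service name (fluentd, kolla-toolbox, cron) and the value is its container definition (image, volumes, environment, enabled flag). A minimal sketch of a copy loop that produces per-item output of this shape, with hypothetical source and destination paths — the real service-cert-copy task in kolla-ansible may use different variables and filters:

- name: common | Copying over extra CA certificates     # sketch of the loop shape only
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/ca/"              # hypothetical staging path for the extra CA certificates
    dest: "/etc/kolla/{{ item.key }}/ca-certificates/"   # one config directory per service, matching the volumes above
    mode: "0644"
  with_dict: "{{ common_services }}"                     # the fluentd / kolla-toolbox / cron map shown in the log
  when: item.value.enabled | bool

The two tasks that follow ("Copying over backend internal TLS certificate" and "... TLS key") use the same loop but are skipped on every host, presumably because backend TLS is not enabled in this testbed.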
2025-04-05 12:23:55.118383 | orchestrator | 2025-04-05 12:23:55.118398 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-04-05 12:23:55.118412 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:04.589) 0:00:11.755 ******** 2025-04-05 12:23:55.118426 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118441 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118594 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.118609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118653 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.118667 | orchestrator | 
skipping: [testbed-node-1] 2025-04-05 12:23:55.118695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118739 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.118753 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:23:55.118767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118884 | orchestrator | skipping: [testbed-node-4] 
2025-04-05 12:23:55.118897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.118917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.118944 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.118956 | orchestrator | 2025-04-05 12:23:55.118969 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-05 12:23:55.118982 | orchestrator | Saturday 05 April 2025 12:22:05 +0000 (0:00:01.782) 0:00:13.537 ******** 2025-04-05 12:23:55.118994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119007 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119029 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119080 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.119092 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.119112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-05 12:23:55.119161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119200 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:23:55.119213 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.119226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119270 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:23:55.119283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119333 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:23:55.119346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-05 12:23:55.119359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.119385 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.119397 | orchestrator | 2025-04-05 12:23:55.119409 | orchestrator | TASK [common : Copying 
over /run subdirectories conf] ************************** 2025-04-05 12:23:55.119422 | orchestrator | Saturday 05 April 2025 12:22:08 +0000 (0:00:03.151) 0:00:16.688 ******** 2025-04-05 12:23:55.119434 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.119446 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.119459 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:23:55.119471 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.119483 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:23:55.119500 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:23:55.119513 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.119525 | orchestrator | 2025-04-05 12:23:55.119538 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-05 12:23:55.119550 | orchestrator | Saturday 05 April 2025 12:22:09 +0000 (0:00:00.816) 0:00:17.504 ******** 2025-04-05 12:23:55.119568 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.119580 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.119592 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:23:55.119605 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.119616 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:23:55.119629 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:23:55.119641 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.119653 | orchestrator | 2025-04-05 12:23:55.119666 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-05 12:23:55.119678 | orchestrator | Saturday 05 April 2025 12:22:10 +0000 (0:00:00.883) 0:00:18.388 ******** 2025-04-05 12:23:55.119691 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:23:55.119703 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.119715 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.119727 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.119739 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.119752 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.119764 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.119833 | orchestrator | 2025-04-05 12:23:55.119847 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-04-05 12:23:55.119860 | orchestrator | Saturday 05 April 2025 12:22:39 +0000 (0:00:29.553) 0:00:47.942 ******** 2025-04-05 12:23:55.119872 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:23:55.119884 | orchestrator | ok: [testbed-manager] 2025-04-05 12:23:55.119897 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:23:55.119909 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:23:55.119921 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:23:55.119934 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:23:55.119950 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:23:55.119963 | orchestrator | 2025-04-05 12:23:55.119976 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-05 12:23:55.119988 | orchestrator | Saturday 05 April 2025 12:22:42 +0000 (0:00:02.594) 0:00:50.536 ******** 2025-04-05 12:23:55.120001 | orchestrator | ok: [testbed-manager] 2025-04-05 12:23:55.120013 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:23:55.120026 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:23:55.120038 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:23:55.120050 | 
orchestrator | ok: [testbed-node-3] 2025-04-05 12:23:55.120062 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:23:55.120075 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:23:55.120087 | orchestrator | 2025-04-05 12:23:55.120100 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-05 12:23:55.120112 | orchestrator | Saturday 05 April 2025 12:22:43 +0000 (0:00:01.101) 0:00:51.638 ******** 2025-04-05 12:23:55.120124 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.120137 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.120149 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:23:55.120162 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.120174 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:23:55.120187 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:23:55.120199 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.120211 | orchestrator | 2025-04-05 12:23:55.120224 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-05 12:23:55.120236 | orchestrator | Saturday 05 April 2025 12:22:44 +0000 (0:00:01.123) 0:00:52.762 ******** 2025-04-05 12:23:55.120249 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:23:55.120261 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:23:55.120273 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:23:55.120286 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:23:55.120298 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:23:55.120310 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:23:55.120323 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:23:55.120335 | orchestrator | 2025-04-05 12:23:55.120347 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-05 12:23:55.120367 | orchestrator | Saturday 05 April 2025 12:22:45 +0000 (0:00:00.873) 0:00:53.636 ******** 2025-04-05 12:23:55.120381 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120453 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120551 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.120582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.120718 | orchestrator | 2025-04-05 12:23:55.120730 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-05 12:23:55.120742 | orchestrator | Saturday 05 April 2025 12:22:50 +0000 (0:00:04.863) 0:00:58.500 ******** 2025-04-05 12:23:55.120755 | orchestrator | [WARNING]: Skipped 2025-04-05 12:23:55.120767 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-05 12:23:55.120794 | orchestrator | to this access issue: 2025-04-05 12:23:55.120807 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-05 12:23:55.120819 | orchestrator | directory 2025-04-05 12:23:55.120832 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:23:55.120850 | orchestrator | 2025-04-05 12:23:55.120863 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-05 12:23:55.120876 | orchestrator | Saturday 05 April 2025 12:22:51 +0000 (0:00:00.657) 0:00:59.158 ******** 2025-04-05 12:23:55.120888 | orchestrator | [WARNING]: Skipped 2025-04-05 12:23:55.120900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-05 12:23:55.120913 | orchestrator | to this access issue: 2025-04-05 12:23:55.120925 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-05 12:23:55.120938 | orchestrator | directory 2025-04-05 12:23:55.120950 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:23:55.120962 | orchestrator | 2025-04-05 12:23:55.120974 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-05 12:23:55.120987 | orchestrator | Saturday 05 April 2025 12:22:51 +0000 (0:00:00.433) 0:00:59.591 ******** 2025-04-05 12:23:55.120999 | orchestrator | [WARNING]: Skipped 2025-04-05 12:23:55.121012 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-05 12:23:55.121024 | orchestrator | to this access issue: 2025-04-05 12:23:55.121036 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-05 12:23:55.121049 | orchestrator | directory 2025-04-05 12:23:55.121061 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:23:55.121073 | orchestrator | 2025-04-05 12:23:55.121086 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-05 12:23:55.121098 | orchestrator | Saturday 05 April 2025 12:22:51 +0000 (0:00:00.476) 0:01:00.068 ******** 2025-04-05 12:23:55.121111 | orchestrator | [WARNING]: Skipped 2025-04-05 12:23:55.121123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-05 12:23:55.121135 | orchestrator | to this access issue: 2025-04-05 12:23:55.121148 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-05 12:23:55.121160 | orchestrator | directory 2025-04-05 12:23:55.121172 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:23:55.121185 | orchestrator | 2025-04-05 12:23:55.121197 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-05 12:23:55.121209 | orchestrator | Saturday 05 April 2025 12:22:52 +0000 (0:00:00.447) 0:01:00.515 ******** 2025-04-05 12:23:55.121222 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.121234 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.121246 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.121259 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.121271 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.121283 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.121296 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.121308 | orchestrator | 2025-04-05 12:23:55.121321 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-05 
12:23:55.121333 | orchestrator | Saturday 05 April 2025 12:22:55 +0000 (0:00:03.121) 0:01:03.636 ******** 2025-04-05 12:23:55.121346 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121358 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121375 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121393 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121406 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121418 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121431 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-05 12:23:55.121450 | orchestrator | 2025-04-05 12:23:55.121462 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-05 12:23:55.121475 | orchestrator | Saturday 05 April 2025 12:22:57 +0000 (0:00:01.999) 0:01:05.636 ******** 2025-04-05 12:23:55.121487 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.121499 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.121512 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.121524 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.121537 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.121549 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.121561 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.121573 | orchestrator | 2025-04-05 12:23:55.121586 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-05 12:23:55.121598 | orchestrator | Saturday 05 April 2025 12:22:59 +0000 (0:00:01.762) 0:01:07.399 ******** 2025-04-05 12:23:55.121611 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121624 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121637 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121667 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121706 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121719 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121758 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121783 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121826 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121857 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121870 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.121883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:23:55.121896 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121909 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121922 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.121939 | orchestrator | 2025-04-05 12:23:55.121952 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-05 12:23:55.121969 | orchestrator | Saturday 05 April 2025 12:23:00 +0000 (0:00:01.620) 0:01:09.020 ******** 2025-04-05 12:23:55.121981 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.121994 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122073 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122089 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122102 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122114 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-05 12:23:55.122127 | orchestrator | 2025-04-05 12:23:55.122139 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-05 12:23:55.122151 | orchestrator | Saturday 05 April 2025 12:23:02 +0000 (0:00:01.662) 0:01:10.682 ******** 2025-04-05 12:23:55.122164 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122213 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122225 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122237 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-05 12:23:55.122250 | orchestrator | 2025-04-05 12:23:55.122262 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-05 12:23:55.122275 | orchestrator | Saturday 05 April 2025 12:23:04 +0000 (0:00:01.808) 0:01:12.491 ******** 2025-04-05 12:23:55.122287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122300 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122384 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-05 12:23:55.122460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122478 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:23:55.122608 | orchestrator | 2025-04-05 12:23:55.122660 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-05 12:23:55.122678 | orchestrator | Saturday 05 April 2025 12:23:07 +0000 (0:00:03.150) 0:01:15.642 ******** 2025-04-05 12:23:55.122691 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.122704 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.122716 | 
orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.122729 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.122741 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.122753 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.122766 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.122807 | orchestrator | 2025-04-05 12:23:55.122821 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-05 12:23:55.122833 | orchestrator | Saturday 05 April 2025 12:23:09 +0000 (0:00:01.612) 0:01:17.254 ******** 2025-04-05 12:23:55.122846 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.122858 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.122871 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.122883 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.122896 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.122908 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.122921 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.122933 | orchestrator | 2025-04-05 12:23:55.122945 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.122958 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:01.090) 0:01:18.345 ******** 2025-04-05 12:23:55.122970 | orchestrator | 2025-04-05 12:23:55.122982 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.122994 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.170) 0:01:18.516 ******** 2025-04-05 12:23:55.123007 | orchestrator | 2025-04-05 12:23:55.123019 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.123032 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.049) 0:01:18.565 ******** 2025-04-05 12:23:55.123044 | orchestrator | 2025-04-05 12:23:55.123056 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.123074 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.056) 0:01:18.621 ******** 2025-04-05 12:23:55.123087 | orchestrator | 2025-04-05 12:23:55.123099 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.123111 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.050) 0:01:18.672 ******** 2025-04-05 12:23:55.123123 | orchestrator | 2025-04-05 12:23:55.123135 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.123148 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.212) 0:01:18.884 ******** 2025-04-05 12:23:55.123160 | orchestrator | 2025-04-05 12:23:55.123172 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-05 12:23:55.123184 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.052) 0:01:18.937 ******** 2025-04-05 12:23:55.123196 | orchestrator | 2025-04-05 12:23:55.123209 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-04-05 12:23:55.123221 | orchestrator | Saturday 05 April 2025 12:23:10 +0000 (0:00:00.088) 0:01:19.025 ******** 2025-04-05 12:23:55.123233 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.123246 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.123259 | 
orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.123271 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.123284 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.123296 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.123308 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.123321 | orchestrator | 2025-04-05 12:23:55.123338 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-04-05 12:23:55.123350 | orchestrator | Saturday 05 April 2025 12:23:19 +0000 (0:00:08.600) 0:01:27.625 ******** 2025-04-05 12:23:55.123363 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.123375 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.123387 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.123400 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.123412 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.123424 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.123436 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.123449 | orchestrator | 2025-04-05 12:23:55.123461 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-04-05 12:23:55.123474 | orchestrator | Saturday 05 April 2025 12:23:42 +0000 (0:00:23.038) 0:01:50.664 ******** 2025-04-05 12:23:55.123486 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:23:55.123498 | orchestrator | ok: [testbed-manager] 2025-04-05 12:23:55.123511 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:23:55.123523 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:23:55.123535 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:23:55.123548 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:23:55.123560 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:23:55.123572 | orchestrator | 2025-04-05 12:23:55.123585 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-04-05 12:23:55.123597 | orchestrator | Saturday 05 April 2025 12:23:44 +0000 (0:00:02.381) 0:01:53.045 ******** 2025-04-05 12:23:55.123610 | orchestrator | changed: [testbed-manager] 2025-04-05 12:23:55.123622 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:23:55.123634 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:23:55.123647 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:23:55.123659 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:23:55.123671 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:23:55.123683 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:23:55.123696 | orchestrator | 2025-04-05 12:23:55.123708 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:23:55.123720 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:55.123734 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:55.123758 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:58.159688 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:58.159845 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:58.159867 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:58.159883 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:23:58.159898 | orchestrator | 2025-04-05 12:23:58.159913 | orchestrator | 2025-04-05 12:23:58.159929 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:23:58.159944 | orchestrator | Saturday 05 April 2025 12:23:54 +0000 (0:00:09.305) 0:02:02.351 ******** 2025-04-05 12:23:58.159958 | orchestrator | =============================================================================== 2025-04-05 12:23:58.159973 | orchestrator | common : Ensure fluentd image is present for label check --------------- 29.55s 2025-04-05 12:23:58.159987 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.04s 2025-04-05 12:23:58.160002 | orchestrator | common : Restart cron container ----------------------------------------- 9.31s 2025-04-05 12:23:58.160016 | orchestrator | common : Restart fluentd container -------------------------------------- 8.60s 2025-04-05 12:23:58.160030 | orchestrator | common : Copying over config.json files for services -------------------- 4.86s 2025-04-05 12:23:58.160043 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.59s 2025-04-05 12:23:58.160057 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.23s 2025-04-05 12:23:58.160071 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.15s 2025-04-05 12:23:58.160084 | orchestrator | common : Check common containers ---------------------------------------- 3.15s 2025-04-05 12:23:58.160098 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.12s 2025-04-05 12:23:58.160112 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.59s 2025-04-05 12:23:58.160125 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.38s 2025-04-05 12:23:58.160139 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.00s 2025-04-05 12:23:58.160153 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.81s 2025-04-05 12:23:58.160166 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.78s 2025-04-05 12:23:58.160181 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.76s 2025-04-05 12:23:58.160194 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.66s 2025-04-05 12:23:58.160208 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.62s 2025-04-05 12:23:58.160224 | orchestrator | common : Creating log volume -------------------------------------------- 1.61s 2025-04-05 12:23:58.160239 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s 2025-04-05 12:23:58.160255 | orchestrator | 2025-04-05 12:23:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:23:58.160287 | orchestrator | 2025-04-05 12:23:58 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:23:58.160895 | orchestrator | 2025-04-05 12:23:58 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:23:58.161056 | orchestrator | 2025-04-05 
12:23:58 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:23:58.162553 | orchestrator | 2025-04-05 12:23:58 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:23:58.164537 | orchestrator | 2025-04-05 12:23:58 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:23:58.165620 | orchestrator | 2025-04-05 12:23:58 | INFO  | Task 06f678d4-47c1-4e17-82ac-8185e8f32a87 is in state STARTED 2025-04-05 12:24:01.207456 | orchestrator | 2025-04-05 12:23:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:01.207592 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:01.211893 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:04.241218 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:04.241322 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:04.241338 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:04.241353 | orchestrator | 2025-04-05 12:24:01 | INFO  | Task 06f678d4-47c1-4e17-82ac-8185e8f32a87 is in state STARTED 2025-04-05 12:24:04.241368 | orchestrator | 2025-04-05 12:24:01 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:04.241397 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:04.243271 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:04.244249 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:04.245053 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:04.245138 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:04.246821 | orchestrator | 2025-04-05 12:24:04 | INFO  | Task 06f678d4-47c1-4e17-82ac-8185e8f32a87 is in state STARTED 2025-04-05 12:24:04.247626 | orchestrator | 2025-04-05 12:24:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:07.273887 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:07.274153 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:07.274966 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:07.275680 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:07.276592 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:07.277191 | orchestrator | 2025-04-05 12:24:07 | INFO  | Task 06f678d4-47c1-4e17-82ac-8185e8f32a87 is in state STARTED 2025-04-05 12:24:07.277376 | orchestrator | 2025-04-05 12:24:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:10.312191 | orchestrator | 2025-04-05 12:24:10 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:10.312550 | 
orchestrator | 2025-04-05 12:24:10 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:10.312622 | orchestrator | 2025-04-05 12:24:10 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:10.313474 | orchestrator | 2025-04-05 12:24:10 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:10.321110 | orchestrator | 2025-04-05 12:24:10 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:13.352841 | orchestrator | 2025-04-05 12:24:10 | INFO  | Task 06f678d4-47c1-4e17-82ac-8185e8f32a87 is in state SUCCESS 2025-04-05 12:24:13.352965 | orchestrator | 2025-04-05 12:24:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:13.353002 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:13.356738 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:13.358903 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:13.360872 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:13.362820 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:13.366948 | orchestrator | 2025-04-05 12:24:13 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:16.412258 | orchestrator | 2025-04-05 12:24:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:16.412382 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:16.414139 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:16.416810 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:16.417307 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:16.422362 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:16.423213 | orchestrator | 2025-04-05 12:24:16 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:19.462013 | orchestrator | 2025-04-05 12:24:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:19.462186 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:19.462496 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:19.463678 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:19.464516 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:19.465079 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:19.465842 | orchestrator | 2025-04-05 12:24:19 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:19.466311 | orchestrator | 2025-04-05 12:24:19 | INFO  | Wait 1 second(s) until the next check 
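Editor's note: the repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a polling loop that watches the background deployment tasks until each one reports SUCCESS. As a minimal sketch of that pattern only (not the OSISM code itself; the names wait_for_tasks and fetch_task_state are hypothetical), it looks roughly like this in Python:

import time
from typing import Callable, Dict, Iterable

def wait_for_tasks(
    task_ids: Iterable[str],
    fetch_task_state: Callable[[str], str],  # stand-in for the real state lookup, which the log does not show
    interval: float = 1.0,
) -> Dict[str, str]:
    """Poll until no task is in state STARTED; return the final state of each task."""
    pending = list(task_ids)
    states: Dict[str, str] = {}
    while pending:
        still_running = []
        for task_id in pending:
            state = fetch_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            # matches the "Wait 1 second(s) until the next check" lines in the log
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states

In the log, each task id flips from STARTED to SUCCESS once its corresponding play has finished, as seen at 12:24:10, 12:24:25 and 12:25:05.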
2025-04-05 12:24:22.513096 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:22.513541 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:22.513583 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:22.514555 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state STARTED 2025-04-05 12:24:22.515060 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:22.515090 | orchestrator | 2025-04-05 12:24:22 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:22.515306 | orchestrator | 2025-04-05 12:24:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:25.557815 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:25.559231 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:25.559266 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:25.561462 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task b4a61426-6111-4624-af62-d068789cabd1 is in state SUCCESS 2025-04-05 12:24:25.563193 | orchestrator | 2025-04-05 12:24:25.563235 | orchestrator | 2025-04-05 12:24:25.563247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:24:25.563258 | orchestrator | 2025-04-05 12:24:25.563269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:24:25.563285 | orchestrator | Saturday 05 April 2025 12:23:59 +0000 (0:00:00.447) 0:00:00.447 ******** 2025-04-05 12:24:25.563296 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:24:25.563309 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:24:25.563320 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:24:25.563331 | orchestrator | 2025-04-05 12:24:25.563341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:24:25.563352 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.633) 0:00:01.080 ******** 2025-04-05 12:24:25.563363 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-04-05 12:24:25.563374 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-04-05 12:24:25.563385 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-04-05 12:24:25.563395 | orchestrator | 2025-04-05 12:24:25.563405 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-04-05 12:24:25.563416 | orchestrator | 2025-04-05 12:24:25.563426 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-04-05 12:24:25.563437 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.539) 0:00:01.620 ******** 2025-04-05 12:24:25.563448 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:24:25.563459 | orchestrator | 2025-04-05 12:24:25.563470 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-04-05 12:24:25.563481 | 
orchestrator | Saturday 05 April 2025 12:24:01 +0000 (0:00:00.621) 0:00:02.241 ******** 2025-04-05 12:24:25.563491 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-05 12:24:25.563502 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-05 12:24:25.563512 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-05 12:24:25.563523 | orchestrator | 2025-04-05 12:24:25.563533 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-04-05 12:24:25.563544 | orchestrator | Saturday 05 April 2025 12:24:02 +0000 (0:00:00.899) 0:00:03.140 ******** 2025-04-05 12:24:25.563555 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-05 12:24:25.563566 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-05 12:24:25.563577 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-05 12:24:25.563587 | orchestrator | 2025-04-05 12:24:25.563598 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-04-05 12:24:25.563625 | orchestrator | Saturday 05 April 2025 12:24:03 +0000 (0:00:01.892) 0:00:05.033 ******** 2025-04-05 12:24:25.563636 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:24:25.563650 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:24:25.563661 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:24:25.563671 | orchestrator | 2025-04-05 12:24:25.563682 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-04-05 12:24:25.563693 | orchestrator | Saturday 05 April 2025 12:24:06 +0000 (0:00:02.580) 0:00:07.613 ******** 2025-04-05 12:24:25.563703 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:24:25.563714 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:24:25.563724 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:24:25.563735 | orchestrator | 2025-04-05 12:24:25.563746 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:24:25.563757 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.563801 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.563814 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.563825 | orchestrator | 2025-04-05 12:24:25.563836 | orchestrator | 2025-04-05 12:24:25.563848 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:24:25.563859 | orchestrator | Saturday 05 April 2025 12:24:09 +0000 (0:00:03.202) 0:00:10.816 ******** 2025-04-05 12:24:25.563871 | orchestrator | =============================================================================== 2025-04-05 12:24:25.563883 | orchestrator | memcached : Restart memcached container --------------------------------- 3.20s 2025-04-05 12:24:25.563894 | orchestrator | memcached : Check memcached container ----------------------------------- 2.58s 2025-04-05 12:24:25.563905 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.89s 2025-04-05 12:24:25.563917 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.90s 2025-04-05 12:24:25.563928 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s 2025-04-05 
12:24:25.563939 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.62s 2025-04-05 12:24:25.563951 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-04-05 12:24:25.563963 | orchestrator | 2025-04-05 12:24:25.563974 | orchestrator | 2025-04-05 12:24:25.563985 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:24:25.563997 | orchestrator | 2025-04-05 12:24:25.564008 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:24:25.564019 | orchestrator | Saturday 05 April 2025 12:23:59 +0000 (0:00:00.431) 0:00:00.431 ******** 2025-04-05 12:24:25.564031 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:24:25.564042 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:24:25.564054 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:24:25.564065 | orchestrator | 2025-04-05 12:24:25.564076 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:24:25.564095 | orchestrator | Saturday 05 April 2025 12:23:59 +0000 (0:00:00.414) 0:00:00.845 ******** 2025-04-05 12:24:25.564107 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-05 12:24:25.564119 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-05 12:24:25.564130 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-05 12:24:25.564141 | orchestrator | 2025-04-05 12:24:25.564151 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-05 12:24:25.564161 | orchestrator | 2025-04-05 12:24:25.564171 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-05 12:24:25.564183 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.449) 0:00:01.295 ******** 2025-04-05 12:24:25.564199 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:24:25.564209 | orchestrator | 2025-04-05 12:24:25.564219 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-05 12:24:25.564230 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.873) 0:00:02.169 ******** 2025-04-05 12:24:25.564241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 
12:24:25.564268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564322 | orchestrator | 2025-04-05 12:24:25.564333 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-05 12:24:25.564343 | orchestrator | Saturday 05 April 2025 12:24:02 +0000 (0:00:01.810) 0:00:03.979 ******** 2025-04-05 12:24:25.564353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564426 | orchestrator | 2025-04-05 12:24:25.564437 | orchestrator | TASK [redis : Copying over redis config files] 
********************************* 2025-04-05 12:24:25.564447 | orchestrator | Saturday 05 April 2025 12:24:05 +0000 (0:00:02.686) 0:00:06.665 ******** 2025-04-05 12:24:25.564458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564526 | orchestrator | 2025-04-05 12:24:25.564540 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-05 12:24:25.564551 | orchestrator | Saturday 05 April 2025 12:24:08 +0000 (0:00:02.590) 0:00:09.256 ******** 2025-04-05 12:24:25.564562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-05 12:24:25.564630 | orchestrator | 2025-04-05 12:24:25.564641 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-05 12:24:25.564651 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:01.968) 0:00:11.225 ******** 2025-04-05 12:24:25.564661 | orchestrator | 2025-04-05 12:24:25.564672 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-05 12:24:25.564685 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:00.066) 0:00:11.291 ******** 2025-04-05 12:24:25.564913 | orchestrator | 2025-04-05 12:24:25.564934 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-05 12:24:25.564944 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:00.058) 0:00:11.350 ******** 2025-04-05 12:24:25.564955 | orchestrator | 2025-04-05 12:24:25.564965 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-05 12:24:25.564975 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:00.168) 0:00:11.518 ******** 2025-04-05 12:24:25.564985 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:24:25.564995 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:24:25.565005 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:24:25.565016 | orchestrator | 2025-04-05 12:24:25.565026 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-05 12:24:25.565036 | orchestrator | Saturday 05 April 2025 12:24:18 +0000 (0:00:08.205) 0:00:19.723 ******** 2025-04-05 12:24:25.565047 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:24:25.565062 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:24:25.565073 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:24:25.565083 | orchestrator | 2025-04-05 12:24:25.565093 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:24:25.565104 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.565114 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.565124 | orchestrator | testbed-node-2 : ok=9 
 changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:24:25.565135 | orchestrator | 2025-04-05 12:24:25.565145 | orchestrator | 2025-04-05 12:24:25.565155 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:24:25.565165 | orchestrator | Saturday 05 April 2025 12:24:22 +0000 (0:00:04.260) 0:00:23.984 ******** 2025-04-05 12:24:25.565175 | orchestrator | =============================================================================== 2025-04-05 12:24:25.565185 | orchestrator | redis : Restart redis container ----------------------------------------- 8.21s 2025-04-05 12:24:25.565195 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.26s 2025-04-05 12:24:25.565205 | orchestrator | redis : Copying over default config.json files -------------------------- 2.69s 2025-04-05 12:24:25.565215 | orchestrator | redis : Copying over redis config files --------------------------------- 2.59s 2025-04-05 12:24:25.565225 | orchestrator | redis : Check redis containers ------------------------------------------ 1.97s 2025-04-05 12:24:25.565235 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.81s 2025-04-05 12:24:25.565245 | orchestrator | redis : include_tasks --------------------------------------------------- 0.87s 2025-04-05 12:24:25.565255 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-04-05 12:24:25.565269 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-04-05 12:24:25.565280 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.29s 2025-04-05 12:24:25.565301 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:25.566395 | orchestrator | 2025-04-05 12:24:25 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:25.566595 | orchestrator | 2025-04-05 12:24:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:28.603220 | orchestrator | 2025-04-05 12:24:28 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:28.604477 | orchestrator | 2025-04-05 12:24:28 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:28.604818 | orchestrator | 2025-04-05 12:24:28 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:28.606365 | orchestrator | 2025-04-05 12:24:28 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:28.609243 | orchestrator | 2025-04-05 12:24:28 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:31.648880 | orchestrator | 2025-04-05 12:24:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:31.649021 | orchestrator | 2025-04-05 12:24:31 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:31.649249 | orchestrator | 2025-04-05 12:24:31 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:31.649279 | orchestrator | 2025-04-05 12:24:31 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:31.649300 | orchestrator | 2025-04-05 12:24:31 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:31.652414 | orchestrator | 2025-04-05 12:24:31 | INFO  | 
Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:34.704008 | orchestrator | 2025-04-05 12:24:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:34.704138 | orchestrator | 2025-04-05 12:24:34 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:37.722538 | orchestrator | 2025-04-05 12:24:34 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:37.722664 | orchestrator | 2025-04-05 12:24:34 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:37.722684 | orchestrator | 2025-04-05 12:24:34 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:37.722700 | orchestrator | 2025-04-05 12:24:34 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:37.722716 | orchestrator | 2025-04-05 12:24:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:37.722751 | orchestrator | 2025-04-05 12:24:37 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:37.722880 | orchestrator | 2025-04-05 12:24:37 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:37.723116 | orchestrator | 2025-04-05 12:24:37 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:37.727208 | orchestrator | 2025-04-05 12:24:37 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:37.727717 | orchestrator | 2025-04-05 12:24:37 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:40.752616 | orchestrator | 2025-04-05 12:24:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:40.752859 | orchestrator | 2025-04-05 12:24:40 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:40.752987 | orchestrator | 2025-04-05 12:24:40 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:40.753588 | orchestrator | 2025-04-05 12:24:40 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:40.754270 | orchestrator | 2025-04-05 12:24:40 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:40.756100 | orchestrator | 2025-04-05 12:24:40 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:43.784567 | orchestrator | 2025-04-05 12:24:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:43.784696 | orchestrator | 2025-04-05 12:24:43 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:43.784993 | orchestrator | 2025-04-05 12:24:43 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:43.785796 | orchestrator | 2025-04-05 12:24:43 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:43.786657 | orchestrator | 2025-04-05 12:24:43 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:43.787452 | orchestrator | 2025-04-05 12:24:43 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:43.787585 | orchestrator | 2025-04-05 12:24:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:46.836357 | orchestrator | 2025-04-05 12:24:46 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:46.837375 | orchestrator | 2025-04-05 12:24:46 | INFO  | 
Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:46.838557 | orchestrator | 2025-04-05 12:24:46 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:46.839700 | orchestrator | 2025-04-05 12:24:46 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:46.840893 | orchestrator | 2025-04-05 12:24:46 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:49.871636 | orchestrator | 2025-04-05 12:24:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:49.871816 | orchestrator | 2025-04-05 12:24:49 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:49.873058 | orchestrator | 2025-04-05 12:24:49 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:49.874060 | orchestrator | 2025-04-05 12:24:49 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:49.875135 | orchestrator | 2025-04-05 12:24:49 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:49.876408 | orchestrator | 2025-04-05 12:24:49 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:49.877387 | orchestrator | 2025-04-05 12:24:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:52.913721 | orchestrator | 2025-04-05 12:24:52 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:52.914628 | orchestrator | 2025-04-05 12:24:52 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:52.916328 | orchestrator | 2025-04-05 12:24:52 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:52.917099 | orchestrator | 2025-04-05 12:24:52 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:52.919218 | orchestrator | 2025-04-05 12:24:52 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:55.955825 | orchestrator | 2025-04-05 12:24:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:55.955969 | orchestrator | 2025-04-05 12:24:55 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:55.957614 | orchestrator | 2025-04-05 12:24:55 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:55.957928 | orchestrator | 2025-04-05 12:24:55 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:55.957957 | orchestrator | 2025-04-05 12:24:55 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:55.957977 | orchestrator | 2025-04-05 12:24:55 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:24:58.997256 | orchestrator | 2025-04-05 12:24:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:24:58.997393 | orchestrator | 2025-04-05 12:24:58 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:24:58.998285 | orchestrator | 2025-04-05 12:24:58 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:24:58.999720 | orchestrator | 2025-04-05 12:24:58 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:24:59.001574 | orchestrator | 2025-04-05 12:24:59 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:24:59.005655 | orchestrator | 
2025-04-05 12:24:59 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:25:02.052826 | orchestrator | 2025-04-05 12:24:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:02.052956 | orchestrator | 2025-04-05 12:25:02 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:02.053939 | orchestrator | 2025-04-05 12:25:02 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state STARTED 2025-04-05 12:25:02.054999 | orchestrator | 2025-04-05 12:25:02 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:02.056371 | orchestrator | 2025-04-05 12:25:02 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:02.057859 | orchestrator | 2025-04-05 12:25:02 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:25:05.096701 | orchestrator | 2025-04-05 12:25:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:05.096885 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:05.097339 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task df836a6d-011b-4c45-ab86-76999b3d9f8e is in state SUCCESS 2025-04-05 12:25:05.099357 | orchestrator | 2025-04-05 12:25:05.099406 | orchestrator | 2025-04-05 12:25:05.099420 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:25:05.099433 | orchestrator | 2025-04-05 12:25:05.099446 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:25:05.099459 | orchestrator | Saturday 05 April 2025 12:23:59 +0000 (0:00:00.240) 0:00:00.240 ******** 2025-04-05 12:25:05.099471 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:05.099492 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:05.099508 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:05.099521 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:05.099533 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:05.099547 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:05.099560 | orchestrator | 2025-04-05 12:25:05.099572 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:25:05.099585 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.851) 0:00:01.091 ******** 2025-04-05 12:25:05.099616 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099629 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099641 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099653 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099666 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099678 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-05 12:25:05.099690 | orchestrator | 2025-04-05 12:25:05.099702 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-05 12:25:05.099715 | orchestrator | 2025-04-05 12:25:05.099727 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-05 12:25:05.099739 | orchestrator | 
Saturday 05 April 2025 12:24:00 +0000 (0:00:00.749) 0:00:01.840 ******** 2025-04-05 12:25:05.099752 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:25:05.099792 | orchestrator | 2025-04-05 12:25:05.099805 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-05 12:25:05.099818 | orchestrator | Saturday 05 April 2025 12:24:02 +0000 (0:00:01.396) 0:00:03.236 ******** 2025-04-05 12:25:05.099831 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-05 12:25:05.099843 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-05 12:25:05.099856 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-05 12:25:05.099868 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-05 12:25:05.099880 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-05 12:25:05.099893 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-05 12:25:05.099905 | orchestrator | 2025-04-05 12:25:05.099918 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-05 12:25:05.099932 | orchestrator | Saturday 05 April 2025 12:24:03 +0000 (0:00:01.210) 0:00:04.447 ******** 2025-04-05 12:25:05.099946 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-05 12:25:05.099960 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-05 12:25:05.099974 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-05 12:25:05.099988 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-05 12:25:05.100003 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-05 12:25:05.100018 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-05 12:25:05.100039 | orchestrator | 2025-04-05 12:25:05.100054 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-05 12:25:05.100068 | orchestrator | Saturday 05 April 2025 12:24:05 +0000 (0:00:02.076) 0:00:06.523 ******** 2025-04-05 12:25:05.100081 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-04-05 12:25:05.100096 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:05.100111 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-05 12:25:05.100125 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:05.100140 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-05 12:25:05.100154 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:05.100169 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-04-05 12:25:05.100182 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:05.100196 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-04-05 12:25:05.100210 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:05.100224 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-04-05 12:25:05.100238 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:05.100252 | orchestrator | 2025-04-05 12:25:05.100266 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-05 12:25:05.100286 | orchestrator | Saturday 05 April 2025 12:24:07 +0000 (0:00:01.452) 0:00:07.976 ******** 2025-04-05 12:25:05.100298 | orchestrator | 
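The module-load tasks above boil down to loading the openvswitch kernel module right away and persisting it via modules-load.d so it survives a reboot (the "Drop module persistence" variant is skipped here). A shell sketch of the equivalent effect, not the role's actual implementation:

    # Load the Open vSwitch kernel module now ...
    sudo modprobe openvswitch
    # ... and persist it so it is loaded again after a reboot.
    echo openvswitch | sudo tee /etc/modules-load.d/openvswitch.conf
    # Verify that the module is present.
    lsmod | grep openvswitch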
skipping: [testbed-node-3] 2025-04-05 12:25:05.100311 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:05.100323 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:05.100335 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:05.100347 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:05.100360 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:05.100372 | orchestrator | 2025-04-05 12:25:05.100384 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-05 12:25:05.100397 | orchestrator | Saturday 05 April 2025 12:24:07 +0000 (0:00:00.535) 0:00:08.511 ******** 2025-04-05 12:25:05.100422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100660 | orchestrator | 2025-04-05 12:25:05.100673 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-05 12:25:05.100686 | orchestrator | Saturday 05 April 2025 12:24:09 +0000 (0:00:01.603) 0:00:10.114 ******** 2025-04-05 12:25:05.100699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.100945 | orchestrator | 2025-04-05 12:25:05.100957 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-04-05 12:25:05.100970 | orchestrator | Saturday 05 April 2025 12:24:11 +0000 (0:00:02.295) 0:00:12.410 ******** 2025-04-05 12:25:05.100982 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:05.100995 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:05.101007 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:05.101020 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:05.101032 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:05.101045 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:05.101057 | orchestrator | 2025-04-05 12:25:05.101070 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-04-05 12:25:05.101082 | orchestrator | Saturday 05 April 2025 12:24:13 +0000 (0:00:02.307) 0:00:14.718 ******** 2025-04-05 12:25:05.101094 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:05.101107 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:05.101119 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:05.101131 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:05.101143 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:05.101155 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:05.101168 | orchestrator | 2025-04-05 12:25:05.101184 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-04-05 12:25:05.101197 | orchestrator | Saturday 05 April 2025 12:24:16 +0000 (0:00:02.232) 0:00:16.950 ******** 2025-04-05 12:25:05.101209 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:05.101221 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:05.101234 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:05.101246 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:05.101258 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:05.101276 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:05.101288 | orchestrator | 2025-04-05 12:25:05.101301 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-04-05 12:25:05.101313 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:01.361) 0:00:18.312 ******** 2025-04-05 12:25:05.101326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101407 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101499 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-04-05 12:25:05.101533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-05 12:25:05.101575 | orchestrator | 2025-04-05 12:25:05.101587 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101600 | orchestrator | Saturday 05 April 2025 12:24:19 +0000 (0:00:02.521) 0:00:20.834 ******** 2025-04-05 12:25:05.101613 | orchestrator | 2025-04-05 12:25:05.101625 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101638 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.192) 0:00:21.026 ******** 2025-04-05 12:25:05.101650 | orchestrator | 2025-04-05 12:25:05.101663 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101675 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.209) 0:00:21.235 ******** 2025-04-05 12:25:05.101688 | orchestrator | 2025-04-05 12:25:05.101700 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101713 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.109) 0:00:21.345 ******** 2025-04-05 12:25:05.101725 | orchestrator | 2025-04-05 12:25:05.101738 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101750 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.280) 0:00:21.626 ******** 2025-04-05 12:25:05.101763 | orchestrator | 2025-04-05 12:25:05.101826 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-05 12:25:05.101839 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.080) 0:00:21.706 ******** 2025-04-05 12:25:05.101852 | orchestrator | 2025-04-05 12:25:05.101864 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-04-05 12:25:05.101877 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.181) 0:00:21.887 ******** 2025-04-05 12:25:05.101889 | orchestrator | changed: [testbed-node-3] 2025-04-05 
12:25:05.101902 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:05.101914 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:05.101927 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:05.101939 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:05.101952 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:05.101965 | orchestrator | 2025-04-05 12:25:05.101977 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-04-05 12:25:05.101990 | orchestrator | Saturday 05 April 2025 12:24:30 +0000 (0:00:09.326) 0:00:31.214 ******** 2025-04-05 12:25:05.102003 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:05.102082 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:05.102098 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:05.102111 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:05.102123 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:05.102136 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:05.102148 | orchestrator | 2025-04-05 12:25:05.102161 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-05 12:25:05.102173 | orchestrator | Saturday 05 April 2025 12:24:32 +0000 (0:00:02.208) 0:00:33.422 ******** 2025-04-05 12:25:05.102186 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:05.102199 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:05.102220 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:05.102242 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:05.102255 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:05.102267 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:05.102280 | orchestrator | 2025-04-05 12:25:05.102300 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-04-05 12:25:05.102313 | orchestrator | Saturday 05 April 2025 12:24:40 +0000 (0:00:08.216) 0:00:41.639 ******** 2025-04-05 12:25:05.102326 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-05 12:25:05.102339 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-04-05 12:25:05.102352 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-05 12:25:05.102364 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-05 12:25:05.102377 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-05 12:25:05.102389 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-05 12:25:05.102401 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-05 12:25:05.102414 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-05 12:25:05.102426 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-05 12:25:05.102438 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-05 12:25:05.102450 | 
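The "Set system-id, hostname and hw-offload" task above writes per-node identifiers into the Open_vSwitch table and removes the hw-offload key (state: absent), and the bridge and port tasks that follow ensure br-ex exists with vxlan0 plugged into it on the network nodes. A sketch of the manual ovs-vsctl equivalents for one node, assuming testbed-node-0 and an existing host interface vxlan0 (the role itself applies this through Ansible, not this CLI sequence):

    # Per-node identifiers in the Open_vSwitch table (example: testbed-node-0).
    sudo ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
    sudo ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
    # hw-offload is managed with state "absent", i.e. the key is removed if present.
    sudo ovs-vsctl remove Open_vSwitch . other_config hw-offload
    # External bridge and VXLAN port as ensured by the following tasks.
    sudo ovs-vsctl --may-exist add-br br-ex
    sudo ovs-vsctl --may-exist add-port br-ex vxlan0
    # Show the resulting configuration.
    sudo ovs-vsctl show

The container healthchecks defined earlier can be exercised the same way from the host, for example with docker exec openvswitch_db ovsdb-client list-dbs and docker exec openvswitch_vswitchd ovs-appctl version.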
orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-05 12:25:05.102462 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-05 12:25:05.102475 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102487 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102499 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102511 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102524 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102541 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-05 12:25:05.102553 | orchestrator | 2025-04-05 12:25:05.102566 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-04-05 12:25:05.102578 | orchestrator | Saturday 05 April 2025 12:24:48 +0000 (0:00:07.852) 0:00:49.492 ******** 2025-04-05 12:25:05.102590 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-04-05 12:25:05.102603 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:05.102615 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-04-05 12:25:05.102628 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:05.102640 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-04-05 12:25:05.102653 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:05.102665 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-04-05 12:25:05.102677 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-04-05 12:25:05.102690 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-04-05 12:25:05.102707 | orchestrator | 2025-04-05 12:25:05.102720 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-04-05 12:25:05.102733 | orchestrator | Saturday 05 April 2025 12:24:51 +0000 (0:00:02.968) 0:00:52.460 ******** 2025-04-05 12:25:05.102745 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-04-05 12:25:05.102758 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:05.102789 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-04-05 12:25:05.102802 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:05.102814 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-04-05 12:25:05.102827 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:05.102839 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-04-05 12:25:05.102852 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-04-05 12:25:05.102864 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-04-05 12:25:05.102876 | orchestrator | 2025-04-05 12:25:05.102889 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-05 12:25:05.102901 | orchestrator | Saturday 05 
April 2025 12:24:55 +0000 (0:00:03.938) 0:00:56.398 ******** 2025-04-05 12:25:05.102913 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:05.102926 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:05.102938 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:05.102993 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:05.103006 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:05.103019 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:05.103031 | orchestrator | 2025-04-05 12:25:05.103043 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:25:05.103063 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:25:05.103193 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:25:05.103211 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:25:05.103224 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:25:05.103236 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:25:05.103254 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:25:05.103267 | orchestrator | 2025-04-05 12:25:05.103279 | orchestrator | 2025-04-05 12:25:05.103292 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:25:05.103308 | orchestrator | Saturday 05 April 2025 12:25:03 +0000 (0:00:07.717) 0:01:04.115 ******** 2025-04-05 12:25:05.103321 | orchestrator | =============================================================================== 2025-04-05 12:25:05.103334 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.94s 2025-04-05 12:25:05.103346 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.33s 2025-04-05 12:25:05.103358 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.85s 2025-04-05 12:25:05.103371 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.94s 2025-04-05 12:25:05.103383 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.97s 2025-04-05 12:25:05.103395 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.52s 2025-04-05 12:25:05.103408 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.31s 2025-04-05 12:25:05.103428 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.30s 2025-04-05 12:25:05.103440 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.23s 2025-04-05 12:25:05.103452 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.21s 2025-04-05 12:25:05.103464 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.08s 2025-04-05 12:25:05.103477 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.60s 2025-04-05 12:25:05.103489 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.45s 2025-04-05 12:25:05.103501 
| orchestrator | openvswitch : include_tasks --------------------------------------------- 1.40s 2025-04-05 12:25:05.103514 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.36s 2025-04-05 12:25:05.103526 | orchestrator | module-load : Load modules ---------------------------------------------- 1.21s 2025-04-05 12:25:05.103538 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2025-04-05 12:25:05.103551 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s 2025-04-05 12:25:05.103563 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-04-05 12:25:05.103575 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.54s 2025-04-05 12:25:05.103587 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:05.103599 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:05.103612 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:05.103629 | orchestrator | 2025-04-05 12:25:05 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:25:08.137443 | orchestrator | 2025-04-05 12:25:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:08.137573 | orchestrator | 2025-04-05 12:25:08 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:08.138008 | orchestrator | 2025-04-05 12:25:08 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:08.138497 | orchestrator | 2025-04-05 12:25:08 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:08.141985 | orchestrator | 2025-04-05 12:25:08 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:08.142456 | orchestrator | 2025-04-05 12:25:08 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:25:11.188478 | orchestrator | 2025-04-05 12:25:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:11.188605 | orchestrator | 2025-04-05 12:25:11 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:11.191750 | orchestrator | 2025-04-05 12:25:11 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:14.226348 | orchestrator | 2025-04-05 12:25:11 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:14.226452 | orchestrator | 2025-04-05 12:25:11 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:14.226468 | orchestrator | 2025-04-05 12:25:11 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state STARTED 2025-04-05 12:25:14.226482 | orchestrator | 2025-04-05 12:25:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:14.226510 | orchestrator | 2025-04-05 12:25:14 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:14.229666 | orchestrator | 2025-04-05 12:25:14 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:14.230627 | orchestrator | 2025-04-05 12:25:14 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:14.231298 | orchestrator | 2025-04-05 12:25:14 | 
INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED
2025-04-05 12:25:14 to 12:25:41 | orchestrator | Tasks f61c1535-db41-4def-9b55-154923cffc65, d997d420-a62b-4ee3-b509-bb456d5f75d6, 8262bc73-7a82-47e7-b59f-ada502806635, 5d67e1b2-93a1-4936-9a3f-7b3782034294 and 52444129-8fa6-4383-b845-dc2969382689 remain in state STARTED; the state is re-checked about every 3 seconds, each round followed by "Wait 1 second(s) until the next check".
2025-04-05 12:25:41.607319 | orchestrator | 2025-04-05 12:25:41 | INFO  | Task 8bffd860-fb0a-40cd-a7bf-435615552957 is in state STARTED
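The k3s play that starts just below warns that the osism.commons collection does not declare support for ansible-core 2.18.4. When such a warning shows up, the versions in play can be confirmed with standard Ansible commands, shown here only as a checking aid:

    # ansible-core version used by the play ...
    ansible --version
    # ... and the installed collection versions, including osism.commons.
    ansible-galaxy collection list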
2025-04-05 12:25:41 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:41.610561 | orchestrator | 2025-04-05 12:25:41 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:41.612722 | orchestrator | 2025-04-05 12:25:41 | INFO  | Task 52444129-8fa6-4383-b845-dc2969382689 is in state SUCCESS 2025-04-05 12:25:41.613122 | orchestrator | 2025-04-05 12:25:41.614614 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.18.4 2025-04-05 12:25:41.614656 | orchestrator | 2025-04-05 12:25:41.614673 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-04-05 12:25:41.614689 | orchestrator | 2025-04-05 12:25:41.614704 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-04-05 12:25:41.614738 | orchestrator | Saturday 05 April 2025 12:21:53 +0000 (0:00:00.174) 0:00:00.174 ******** 2025-04-05 12:25:41.614754 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.614811 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.614847 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.614863 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.614879 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.614894 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.614909 | orchestrator | 2025-04-05 12:25:41.614924 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-04-05 12:25:41.614940 | orchestrator | Saturday 05 April 2025 12:21:54 +0000 (0:00:00.686) 0:00:00.860 ******** 2025-04-05 12:25:41.614955 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.614971 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.614986 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.615001 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.615016 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.615031 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.615046 | orchestrator | 2025-04-05 12:25:41.615062 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-04-05 12:25:41.615077 | orchestrator | Saturday 05 April 2025 12:21:54 +0000 (0:00:00.664) 0:00:01.525 ******** 2025-04-05 12:25:41.615092 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.615107 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.615122 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.615137 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.615152 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.615168 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.615183 | orchestrator | 2025-04-05 12:25:41.615201 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-04-05 12:25:41.615218 | orchestrator | Saturday 05 April 2025 12:21:55 +0000 (0:00:00.795) 0:00:02.321 ******** 2025-04-05 12:25:41.615235 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.615252 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.615269 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.615285 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.615302 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.615318 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.615335 | orchestrator | 
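The k3s_prereq tasks in this play switch on kernel IP forwarding on every node (IPv4 above; IPv6 forwarding and router advertisements follow in the next tasks). As a rough manual equivalent only — the drop-in file name and exact key set are assumptions, since the role's template is not shown in this log:

    # Persist the forwarding sysctls the play enables (illustrative file name).
    cat <<'EOF' | sudo tee /etc/sysctl.d/90-k3s-forwarding.conf
    net.ipv4.ip_forward = 1
    net.ipv6.conf.all.forwarding = 1
    net.ipv6.conf.all.accept_ra = 2
    EOF
    # Apply all sysctl drop-ins without a reboot.
    sudo sysctl --system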
2025-04-05 12:25:41.615367 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-04-05 12:25:41.615384 | orchestrator | Saturday 05 April 2025 12:21:57 +0000 (0:00:02.403) 0:00:04.725 ******** 2025-04-05 12:25:41.615399 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.615415 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.615430 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.615445 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.615461 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.615477 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.615492 | orchestrator | 2025-04-05 12:25:41.615508 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-04-05 12:25:41.615523 | orchestrator | Saturday 05 April 2025 12:22:00 +0000 (0:00:02.256) 0:00:06.981 ******** 2025-04-05 12:25:41.615553 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.615569 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.615585 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.615599 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.615619 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.615633 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.615647 | orchestrator | 2025-04-05 12:25:41.615662 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-04-05 12:25:41.615676 | orchestrator | Saturday 05 April 2025 12:22:02 +0000 (0:00:02.065) 0:00:09.046 ******** 2025-04-05 12:25:41.615690 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.615704 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.615718 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.615731 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.615745 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.615759 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.615791 | orchestrator | 2025-04-05 12:25:41.615806 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-04-05 12:25:41.615820 | orchestrator | Saturday 05 April 2025 12:22:02 +0000 (0:00:00.696) 0:00:09.743 ******** 2025-04-05 12:25:41.615834 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.615848 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.615862 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.615876 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.615889 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.615903 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.615917 | orchestrator | 2025-04-05 12:25:41.615931 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-04-05 12:25:41.615945 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:00.614) 0:00:10.357 ******** 2025-04-05 12:25:41.615959 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.615973 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.615987 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.616001 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.616015 | orchestrator | skipping: 
[testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.616029 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.616044 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.616058 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.616071 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.616095 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.616110 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.616124 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.616138 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.616151 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.616165 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.616179 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:25:41.616193 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:25:41.616206 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.616220 | orchestrator | 2025-04-05 12:25:41.616234 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-04-05 12:25:41.616248 | orchestrator | Saturday 05 April 2025 12:22:04 +0000 (0:00:00.952) 0:00:11.310 ******** 2025-04-05 12:25:41.616262 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.616275 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.616296 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.616310 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.616324 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.616338 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.616351 | orchestrator | 2025-04-05 12:25:41.616365 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-04-05 12:25:41.616381 | orchestrator | Saturday 05 April 2025 12:22:05 +0000 (0:00:01.313) 0:00:12.624 ******** 2025-04-05 12:25:41.616394 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.616408 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.616422 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.616436 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.616449 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.616463 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.616477 | orchestrator | 2025-04-05 12:25:41.616491 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-04-05 12:25:41.616505 | orchestrator | Saturday 05 April 2025 12:22:07 +0000 (0:00:01.165) 0:00:13.789 ******** 2025-04-05 12:25:41.616520 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.616533 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.616547 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.616561 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.616575 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.616589 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.616603 | orchestrator | 2025-04-05 12:25:41.616617 
| orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-04-05 12:25:41.616631 | orchestrator | Saturday 05 April 2025 12:22:14 +0000 (0:00:07.261) 0:00:21.050 ******** 2025-04-05 12:25:41.616645 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.616658 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.616672 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.616686 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.616700 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.616714 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.616727 | orchestrator | 2025-04-05 12:25:41.616741 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-04-05 12:25:41.616755 | orchestrator | Saturday 05 April 2025 12:22:15 +0000 (0:00:01.120) 0:00:22.171 ******** 2025-04-05 12:25:41.616786 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.616800 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.616814 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.616828 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.616842 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.616855 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.616869 | orchestrator | 2025-04-05 12:25:41.616883 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-04-05 12:25:41.616904 | orchestrator | Saturday 05 April 2025 12:22:17 +0000 (0:00:01.628) 0:00:23.800 ******** 2025-04-05 12:25:41.616920 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.616941 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.616956 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.616972 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.616987 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.617002 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.617017 | orchestrator | 2025-04-05 12:25:41.617032 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-04-05 12:25:41.617047 | orchestrator | Saturday 05 April 2025 12:22:17 +0000 (0:00:00.852) 0:00:24.652 ******** 2025-04-05 12:25:41.617062 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-04-05 12:25:41.617078 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-04-05 12:25:41.617093 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.617109 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-04-05 12:25:41.617130 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-04-05 12:25:41.617145 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.617161 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-04-05 12:25:41.617176 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-04-05 12:25:41.617191 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.617206 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-04-05 12:25:41.617221 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-04-05 12:25:41.617237 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.617252 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-04-05 12:25:41.617267 | orchestrator | 
skipping: [testbed-node-1] => (item=rancher/k3s)  2025-04-05 12:25:41.617282 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.617297 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-04-05 12:25:41.617313 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-04-05 12:25:41.617328 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.617343 | orchestrator | 2025-04-05 12:25:41.617365 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-04-05 12:25:41.617381 | orchestrator | Saturday 05 April 2025 12:22:18 +0000 (0:00:00.936) 0:00:25.589 ******** 2025-04-05 12:25:41.617396 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.617411 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.617426 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.617441 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.617456 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.617471 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.617486 | orchestrator | 2025-04-05 12:25:41.617501 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-04-05 12:25:41.617516 | orchestrator | 2025-04-05 12:25:41.617531 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-04-05 12:25:41.617546 | orchestrator | Saturday 05 April 2025 12:22:19 +0000 (0:00:01.077) 0:00:26.666 ******** 2025-04-05 12:25:41.617561 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.617576 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.617591 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.617606 | orchestrator | 2025-04-05 12:25:41.617621 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-04-05 12:25:41.617637 | orchestrator | Saturday 05 April 2025 12:22:20 +0000 (0:00:01.099) 0:00:27.766 ******** 2025-04-05 12:25:41.617652 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.617666 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.617682 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.617696 | orchestrator | 2025-04-05 12:25:41.617711 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-04-05 12:25:41.617726 | orchestrator | Saturday 05 April 2025 12:22:22 +0000 (0:00:01.313) 0:00:29.080 ******** 2025-04-05 12:25:41.617741 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.617757 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.617826 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.617841 | orchestrator | 2025-04-05 12:25:41.617855 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-04-05 12:25:41.617870 | orchestrator | Saturday 05 April 2025 12:22:23 +0000 (0:00:01.125) 0:00:30.205 ******** 2025-04-05 12:25:41.617883 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.617897 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.617911 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.617925 | orchestrator | 2025-04-05 12:25:41.617939 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-04-05 12:25:41.617953 | orchestrator | Saturday 05 April 2025 12:22:24 +0000 (0:00:00.737) 0:00:30.942 ******** 2025-04-05 12:25:41.617967 | orchestrator | skipping: 
[testbed-node-0] 2025-04-05 12:25:41.617981 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618002 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.618194 | orchestrator | 2025-04-05 12:25:41.618219 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-04-05 12:25:41.618346 | orchestrator | Saturday 05 April 2025 12:22:24 +0000 (0:00:00.337) 0:00:31.280 ******** 2025-04-05 12:25:41.618362 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:25:41.618375 | orchestrator | 2025-04-05 12:25:41.618388 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-04-05 12:25:41.618401 | orchestrator | Saturday 05 April 2025 12:22:25 +0000 (0:00:00.774) 0:00:32.055 ******** 2025-04-05 12:25:41.618413 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.618426 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.618438 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.618451 | orchestrator | 2025-04-05 12:25:41.618463 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-04-05 12:25:41.618476 | orchestrator | Saturday 05 April 2025 12:22:27 +0000 (0:00:02.076) 0:00:34.132 ******** 2025-04-05 12:25:41.618488 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618501 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.618513 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.618526 | orchestrator | 2025-04-05 12:25:41.618538 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-04-05 12:25:41.618551 | orchestrator | Saturday 05 April 2025 12:22:28 +0000 (0:00:00.675) 0:00:34.807 ******** 2025-04-05 12:25:41.618563 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618576 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.618588 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.618601 | orchestrator | 2025-04-05 12:25:41.618613 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-04-05 12:25:41.618626 | orchestrator | Saturday 05 April 2025 12:22:28 +0000 (0:00:00.737) 0:00:35.544 ******** 2025-04-05 12:25:41.618638 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618651 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.618663 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.618676 | orchestrator | 2025-04-05 12:25:41.618688 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-04-05 12:25:41.618700 | orchestrator | Saturday 05 April 2025 12:22:30 +0000 (0:00:01.692) 0:00:37.237 ******** 2025-04-05 12:25:41.618713 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.618725 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618738 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.618750 | orchestrator | 2025-04-05 12:25:41.618779 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-04-05 12:25:41.618793 | orchestrator | Saturday 05 April 2025 12:22:30 +0000 (0:00:00.497) 0:00:37.735 ******** 2025-04-05 12:25:41.618805 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.618818 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.618830 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:25:41.618843 | orchestrator | 2025-04-05 12:25:41.618855 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-04-05 12:25:41.618868 | orchestrator | Saturday 05 April 2025 12:22:31 +0000 (0:00:00.388) 0:00:38.123 ******** 2025-04-05 12:25:41.618880 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.618897 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.618910 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.618922 | orchestrator | 2025-04-05 12:25:41.618935 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-04-05 12:25:41.618960 | orchestrator | Saturday 05 April 2025 12:22:33 +0000 (0:00:01.656) 0:00:39.779 ******** 2025-04-05 12:25:41.618974 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-05 12:25:41.618987 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-05 12:25:41.619009 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-05 12:25:41.619022 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-05 12:25:41.619035 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-05 12:25:41.619047 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-05 12:25:41.619060 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-05 12:25:41.619072 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-05 12:25:41.619084 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-05 12:25:41.619096 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-04-05 12:25:41.619114 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-04-05 12:25:41.619126 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-04-05 12:25:41.619139 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-04-05 12:25:41.619151 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-04-05 12:25:41.619163 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
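The two k3s_server tasks above bootstrap the control plane inside a throw-away k3s-init unit and then poll until every server has registered as a node; the FAILED - RETRYING lines are expected while etcd and the API server come up. A hand-run sketch of the same idea (the role's exact k3s arguments are not visible in the log, so flags and placeholders here are assumptions):

    # First server: start a transient systemd unit that initializes the cluster.
    sudo systemd-run --unit=k3s-init -p Restart=on-failure \
      k3s server --cluster-init --token '<shared-token>'
    # Additional servers join the first one with the same token.
    sudo systemd-run --unit=k3s-init -p Restart=on-failure \
      k3s server --server https://<first-server-ip>:6443 --token '<shared-token>'
    # Poll until all three control-plane nodes have registered.
    sudo k3s kubectl get nodes --no-headers | wc -l
    # Inspect the transient unit if the count never reaches the expected value.
    journalctl -u k3s-init -e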
2025-04-05 12:25:41.619175 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.619188 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.619200 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.619212 | orchestrator | 2025-04-05 12:25:41.619225 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-04-05 12:25:41.619237 | orchestrator | Saturday 05 April 2025 12:23:27 +0000 (0:00:54.297) 0:01:34.076 ******** 2025-04-05 12:25:41.619250 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.619263 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.619275 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.619287 | orchestrator | 2025-04-05 12:25:41.619300 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-04-05 12:25:41.619312 | orchestrator | Saturday 05 April 2025 12:23:27 +0000 (0:00:00.280) 0:01:34.357 ******** 2025-04-05 12:25:41.619324 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.619337 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.619349 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.619361 | orchestrator | 2025-04-05 12:25:41.619374 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-04-05 12:25:41.619386 | orchestrator | Saturday 05 April 2025 12:23:28 +0000 (0:00:00.870) 0:01:35.228 ******** 2025-04-05 12:25:41.619399 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.619411 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.619423 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.619436 | orchestrator | 2025-04-05 12:25:41.619448 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-04-05 12:25:41.619461 | orchestrator | Saturday 05 April 2025 12:23:29 +0000 (0:00:01.021) 0:01:36.250 ******** 2025-04-05 12:25:41.619479 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.619492 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.619504 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.619517 | orchestrator | 2025-04-05 12:25:41.619529 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-04-05 12:25:41.619542 | orchestrator | Saturday 05 April 2025 12:23:44 +0000 (0:00:14.867) 0:01:51.118 ******** 2025-04-05 12:25:41.619554 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.619567 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.619579 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.619591 | orchestrator | 2025-04-05 12:25:41.619604 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-04-05 12:25:41.619616 | orchestrator | Saturday 05 April 2025 12:23:44 +0000 (0:00:00.579) 0:01:51.697 ******** 2025-04-05 12:25:41.619629 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.619641 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.619653 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.619666 | orchestrator | 2025-04-05 12:25:41.619679 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-04-05 12:25:41.619691 | orchestrator | Saturday 05 April 2025 12:23:45 +0000 (0:00:00.544) 0:01:52.242 ******** 2025-04-05 12:25:41.619708 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.619721 | orchestrator | changed: 
[testbed-node-1] 2025-04-05 12:25:41.619733 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.619746 | orchestrator | 2025-04-05 12:25:41.619758 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-04-05 12:25:41.619785 | orchestrator | Saturday 05 April 2025 12:23:46 +0000 (0:00:00.695) 0:01:52.938 ******** 2025-04-05 12:25:41.619798 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.619811 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.619823 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.619835 | orchestrator | 2025-04-05 12:25:41.619848 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-04-05 12:25:41.619860 | orchestrator | Saturday 05 April 2025 12:23:47 +0000 (0:00:01.094) 0:01:54.033 ******** 2025-04-05 12:25:41.619873 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.619885 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.619897 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.619910 | orchestrator | 2025-04-05 12:25:41.619922 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-04-05 12:25:41.619935 | orchestrator | Saturday 05 April 2025 12:23:47 +0000 (0:00:00.370) 0:01:54.403 ******** 2025-04-05 12:25:41.619947 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.619960 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.619972 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.619985 | orchestrator | 2025-04-05 12:25:41.619997 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-04-05 12:25:41.620009 | orchestrator | Saturday 05 April 2025 12:23:48 +0000 (0:00:00.591) 0:01:54.995 ******** 2025-04-05 12:25:41.620021 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.620033 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.620046 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.620058 | orchestrator | 2025-04-05 12:25:41.620070 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-04-05 12:25:41.620082 | orchestrator | Saturday 05 April 2025 12:23:48 +0000 (0:00:00.558) 0:01:55.553 ******** 2025-04-05 12:25:41.620095 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.620107 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.620119 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.620131 | orchestrator | 2025-04-05 12:25:41.620143 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-04-05 12:25:41.620156 | orchestrator | Saturday 05 April 2025 12:23:49 +0000 (0:00:00.892) 0:01:56.445 ******** 2025-04-05 12:25:41.620168 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:25:41.620187 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:25:41.620200 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:25:41.620212 | orchestrator | 2025-04-05 12:25:41.620225 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-04-05 12:25:41.620237 | orchestrator | Saturday 05 April 2025 12:23:50 +0000 (0:00:00.643) 0:01:57.089 ******** 2025-04-05 12:25:41.620249 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.620261 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.620273 | orchestrator | skipping: [testbed-node-2] 2025-04-05 
12:25:41.620285 | orchestrator | 2025-04-05 12:25:41.620297 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-04-05 12:25:41.620310 | orchestrator | Saturday 05 April 2025 12:23:50 +0000 (0:00:00.238) 0:01:57.328 ******** 2025-04-05 12:25:41.620322 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.620335 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.620347 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.620359 | orchestrator | 2025-04-05 12:25:41.620372 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-04-05 12:25:41.620389 | orchestrator | Saturday 05 April 2025 12:23:50 +0000 (0:00:00.243) 0:01:57.571 ******** 2025-04-05 12:25:41.620401 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.620415 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.620434 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.620448 | orchestrator | 2025-04-05 12:25:41.620461 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-04-05 12:25:41.620473 | orchestrator | Saturday 05 April 2025 12:23:51 +0000 (0:00:00.749) 0:01:58.320 ******** 2025-04-05 12:25:41.620486 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.620499 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.620511 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.620523 | orchestrator | 2025-04-05 12:25:41.620536 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-04-05 12:25:41.620549 | orchestrator | Saturday 05 April 2025 12:23:52 +0000 (0:00:00.518) 0:01:58.838 ******** 2025-04-05 12:25:41.620561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-05 12:25:41.620574 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-05 12:25:41.620586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-05 12:25:41.620598 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-05 12:25:41.620611 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-05 12:25:41.620624 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-05 12:25:41.620637 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-05 12:25:41.620649 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-05 12:25:41.620661 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-05 12:25:41.620673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-04-05 12:25:41.620686 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-05 12:25:41.620704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-05 12:25:41.620722 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-04-05 12:25:41.620735 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-05 12:25:41.620748 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-05 12:25:41.620780 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-05 12:25:41.620793 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-05 12:25:41.620805 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-05 12:25:41.620818 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-05 12:25:41.620831 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-05 12:25:41.620843 | orchestrator | 2025-04-05 12:25:41.620856 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-04-05 12:25:41.620868 | orchestrator | 2025-04-05 12:25:41.620880 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-04-05 12:25:41.620893 | orchestrator | Saturday 05 April 2025 12:23:54 +0000 (0:00:02.677) 0:02:01.516 ******** 2025-04-05 12:25:41.620905 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.620918 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.620930 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.620943 | orchestrator | 2025-04-05 12:25:41.620955 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-04-05 12:25:41.620967 | orchestrator | Saturday 05 April 2025 12:23:55 +0000 (0:00:00.422) 0:02:01.939 ******** 2025-04-05 12:25:41.620980 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.620992 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.621004 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.621017 | orchestrator | 2025-04-05 12:25:41.621029 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-04-05 12:25:41.621042 | orchestrator | Saturday 05 April 2025 12:23:55 +0000 (0:00:00.654) 0:02:02.593 ******** 2025-04-05 12:25:41.621054 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.621067 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.621079 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.621091 | orchestrator | 2025-04-05 12:25:41.621103 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-04-05 12:25:41.621116 | orchestrator | Saturday 05 April 2025 12:23:56 +0000 (0:00:00.394) 0:02:02.988 ******** 2025-04-05 12:25:41.621128 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:25:41.621140 | orchestrator | 2025-04-05 12:25:41.621152 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-04-05 12:25:41.621165 | orchestrator | Saturday 05 April 2025 12:23:56 +0000 (0:00:00.743) 0:02:03.731 ******** 2025-04-05 12:25:41.621177 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.621189 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.621201 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.621214 | orchestrator | 2025-04-05 12:25:41.621226 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-04-05 12:25:41.621238 | orchestrator | Saturday 05 April 2025 12:23:57 +0000 (0:00:00.373) 0:02:04.104 ******** 2025-04-05 12:25:41.621250 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.621263 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.621275 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.621287 | orchestrator | 2025-04-05 12:25:41.621299 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-04-05 12:25:41.621312 | orchestrator | Saturday 05 April 2025 12:23:57 +0000 (0:00:00.336) 0:02:04.441 ******** 2025-04-05 12:25:41.621324 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:41.621336 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:41.621348 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:41.621360 | orchestrator | 2025-04-05 12:25:41.621372 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-04-05 12:25:41.621384 | orchestrator | Saturday 05 April 2025 12:23:58 +0000 (0:00:00.395) 0:02:04.836 ******** 2025-04-05 12:25:41.621406 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.621419 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.621431 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.621443 | orchestrator | 2025-04-05 12:25:41.621456 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-04-05 12:25:41.621468 | orchestrator | Saturday 05 April 2025 12:23:59 +0000 (0:00:01.640) 0:02:06.477 ******** 2025-04-05 12:25:41.621481 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:25:41.621493 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:25:41.621505 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:25:41.621518 | orchestrator | 2025-04-05 12:25:41.621530 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-04-05 12:25:41.621542 | orchestrator | 2025-04-05 12:25:41.621555 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-04-05 12:25:41.621567 | orchestrator | Saturday 05 April 2025 12:24:09 +0000 (0:00:10.043) 0:02:16.520 ******** 2025-04-05 12:25:41.621579 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.621592 | orchestrator | 2025-04-05 12:25:41.621604 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-04-05 12:25:41.621617 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:00.656) 0:02:17.177 ******** 2025-04-05 12:25:41.621629 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.621642 | orchestrator | 2025-04-05 12:25:41.621659 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-05 12:25:41.621672 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:00.395) 0:02:17.572 ******** 2025-04-05 12:25:41.621690 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-05 12:25:41.621702 | orchestrator | 2025-04-05 12:25:41.621715 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-05 12:25:41.621728 | orchestrator | Saturday 05 April 2025 12:24:11 +0000 (0:00:00.795) 0:02:18.367 ******** 2025-04-05 12:25:41.621740 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.621753 | orchestrator | 2025-04-05 
12:25:41.621813 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-04-05 12:25:41.621828 | orchestrator | Saturday 05 April 2025 12:24:12 +0000 (0:00:00.748) 0:02:19.115 ******** 2025-04-05 12:25:41.621841 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.621852 | orchestrator | 2025-04-05 12:25:41.621863 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-04-05 12:25:41.621873 | orchestrator | Saturday 05 April 2025 12:24:12 +0000 (0:00:00.461) 0:02:19.577 ******** 2025-04-05 12:25:41.621883 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-05 12:25:41.621894 | orchestrator | 2025-04-05 12:25:41.621904 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-04-05 12:25:41.621914 | orchestrator | Saturday 05 April 2025 12:24:14 +0000 (0:00:01.390) 0:02:20.968 ******** 2025-04-05 12:25:41.621924 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-05 12:25:41.621935 | orchestrator | 2025-04-05 12:25:41.621945 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-04-05 12:25:41.621955 | orchestrator | Saturday 05 April 2025 12:24:14 +0000 (0:00:00.711) 0:02:21.680 ******** 2025-04-05 12:25:41.621965 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.621975 | orchestrator | 2025-04-05 12:25:41.621985 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-04-05 12:25:41.621996 | orchestrator | Saturday 05 April 2025 12:24:15 +0000 (0:00:00.328) 0:02:22.009 ******** 2025-04-05 12:25:41.622006 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.622046 | orchestrator | 2025-04-05 12:25:41.622059 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-04-05 12:25:41.622069 | orchestrator | 2025-04-05 12:25:41.622080 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-04-05 12:25:41.622090 | orchestrator | Saturday 05 April 2025 12:24:15 +0000 (0:00:00.388) 0:02:22.398 ******** 2025-04-05 12:25:41.622107 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.622117 | orchestrator | 2025-04-05 12:25:41.622128 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-04-05 12:25:41.622138 | orchestrator | Saturday 05 April 2025 12:24:15 +0000 (0:00:00.124) 0:02:22.522 ******** 2025-04-05 12:25:41.622148 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 12:25:41.622158 | orchestrator | 2025-04-05 12:25:41.622168 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-04-05 12:25:41.622179 | orchestrator | Saturday 05 April 2025 12:24:15 +0000 (0:00:00.221) 0:02:22.743 ******** 2025-04-05 12:25:41.622189 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.622199 | orchestrator | 2025-04-05 12:25:41.622209 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-04-05 12:25:41.622219 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:01.067) 0:02:23.810 ******** 2025-04-05 12:25:41.622229 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.622239 | orchestrator | 2025-04-05 12:25:41.622249 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-04-05 12:25:41.622259 | orchestrator | Saturday 05 April 2025 12:24:18 +0000 (0:00:01.203) 0:02:25.014 ******** 2025-04-05 12:25:41.622269 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.622279 | orchestrator | 2025-04-05 12:25:41.622289 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-04-05 12:25:41.622299 | orchestrator | Saturday 05 April 2025 12:24:18 +0000 (0:00:00.707) 0:02:25.721 ******** 2025-04-05 12:25:41.622309 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.622319 | orchestrator | 2025-04-05 12:25:41.622329 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-04-05 12:25:41.622339 | orchestrator | Saturday 05 April 2025 12:24:19 +0000 (0:00:00.424) 0:02:26.146 ******** 2025-04-05 12:25:41.622349 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.622359 | orchestrator | 2025-04-05 12:25:41.622369 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-04-05 12:25:41.622379 | orchestrator | Saturday 05 April 2025 12:24:24 +0000 (0:00:05.239) 0:02:31.385 ******** 2025-04-05 12:25:41.622389 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.622399 | orchestrator | 2025-04-05 12:25:41.622413 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-04-05 12:25:41.622423 | orchestrator | Saturday 05 April 2025 12:24:34 +0000 (0:00:10.192) 0:02:41.578 ******** 2025-04-05 12:25:41.622433 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.622443 | orchestrator | 2025-04-05 12:25:41.622453 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-04-05 12:25:41.622464 | orchestrator | 2025-04-05 12:25:41.622474 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-04-05 12:25:41.622484 | orchestrator | Saturday 05 April 2025 12:24:35 +0000 (0:00:00.412) 0:02:41.990 ******** 2025-04-05 12:25:41.622494 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.622504 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.622514 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.622524 | orchestrator | 2025-04-05 12:25:41.622534 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-04-05 12:25:41.622544 | orchestrator | Saturday 05 April 2025 12:24:35 +0000 (0:00:00.443) 0:02:42.434 ******** 2025-04-05 12:25:41.622554 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622564 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.622574 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.622584 | orchestrator | 2025-04-05 12:25:41.622595 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-04-05 12:25:41.622605 | orchestrator | Saturday 05 April 2025 12:24:35 +0000 (0:00:00.255) 0:02:42.690 ******** 2025-04-05 12:25:41.622615 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:25:41.622630 | orchestrator | 2025-04-05 12:25:41.622645 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-04-05 12:25:41.622656 | orchestrator | Saturday 05 April 2025 12:24:36 +0000 (0:00:00.471) 0:02:43.162 ******** 2025-04-05 
12:25:41.622666 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.622676 | orchestrator | 2025-04-05 12:25:41.622686 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-04-05 12:25:41.622696 | orchestrator | Saturday 05 April 2025 12:24:37 +0000 (0:00:00.813) 0:02:43.976 ******** 2025-04-05 12:25:41.622706 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.622716 | orchestrator | 2025-04-05 12:25:41.622727 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-04-05 12:25:41.622737 | orchestrator | Saturday 05 April 2025 12:24:37 +0000 (0:00:00.713) 0:02:44.689 ******** 2025-04-05 12:25:41.622747 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622761 | orchestrator | 2025-04-05 12:25:41.622786 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-04-05 12:25:41.622796 | orchestrator | Saturday 05 April 2025 12:24:38 +0000 (0:00:00.565) 0:02:45.254 ******** 2025-04-05 12:25:41.622806 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.622816 | orchestrator | 2025-04-05 12:25:41.622827 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-04-05 12:25:41.622837 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:00.966) 0:02:46.221 ******** 2025-04-05 12:25:41.622847 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622857 | orchestrator | 2025-04-05 12:25:41.622867 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-04-05 12:25:41.622877 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:00.209) 0:02:46.431 ******** 2025-04-05 12:25:41.622887 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622898 | orchestrator | 2025-04-05 12:25:41.622907 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-04-05 12:25:41.622918 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:00.214) 0:02:46.646 ******** 2025-04-05 12:25:41.622928 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622938 | orchestrator | 2025-04-05 12:25:41.622948 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-04-05 12:25:41.622958 | orchestrator | Saturday 05 April 2025 12:24:40 +0000 (0:00:00.180) 0:02:46.826 ******** 2025-04-05 12:25:41.622968 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.622978 | orchestrator | 2025-04-05 12:25:41.622988 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-04-05 12:25:41.622999 | orchestrator | Saturday 05 April 2025 12:24:40 +0000 (0:00:00.178) 0:02:47.004 ******** 2025-04-05 12:25:41.623009 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.623019 | orchestrator | 2025-04-05 12:25:41.623029 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-04-05 12:25:41.623039 | orchestrator | Saturday 05 April 2025 12:24:44 +0000 (0:00:04.046) 0:02:51.051 ******** 2025-04-05 12:25:41.623049 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-04-05 12:25:41.623059 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-04-05 12:25:41.623069 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=deployment/hubble-relay) 2025-04-05 12:25:41.623079 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-04-05 12:25:41.623089 | orchestrator | 2025-04-05 12:25:41.623099 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-04-05 12:25:41.623109 | orchestrator | Saturday 05 April 2025 12:25:14 +0000 (0:00:30.181) 0:03:21.232 ******** 2025-04-05 12:25:41.623119 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.623129 | orchestrator | 2025-04-05 12:25:41.623140 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-04-05 12:25:41.623154 | orchestrator | Saturday 05 April 2025 12:25:15 +0000 (0:00:01.196) 0:03:22.429 ******** 2025-04-05 12:25:41.623170 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.623180 | orchestrator | 2025-04-05 12:25:41.623190 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-04-05 12:25:41.623200 | orchestrator | Saturday 05 April 2025 12:25:17 +0000 (0:00:01.384) 0:03:23.814 ******** 2025-04-05 12:25:41.623210 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-05 12:25:41.623220 | orchestrator | 2025-04-05 12:25:41.623230 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-04-05 12:25:41.623240 | orchestrator | Saturday 05 April 2025 12:25:18 +0000 (0:00:01.028) 0:03:24.842 ******** 2025-04-05 12:25:41.623250 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.623260 | orchestrator | 2025-04-05 12:25:41.623270 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-04-05 12:25:41.623280 | orchestrator | Saturday 05 April 2025 12:25:18 +0000 (0:00:00.233) 0:03:25.075 ******** 2025-04-05 12:25:41.623290 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-04-05 12:25:41.623301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-04-05 12:25:41.623311 | orchestrator | 2025-04-05 12:25:41.623321 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-04-05 12:25:41.623331 | orchestrator | Saturday 05 April 2025 12:25:20 +0000 (0:00:02.174) 0:03:27.250 ******** 2025-04-05 12:25:41.623341 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:41.623351 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:41.623361 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:41.623371 | orchestrator | 2025-04-05 12:25:41.623381 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-04-05 12:25:41.623392 | orchestrator | Saturday 05 April 2025 12:25:20 +0000 (0:00:00.221) 0:03:27.472 ******** 2025-04-05 12:25:41.623402 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.623416 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.623426 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.623437 | orchestrator | 2025-04-05 12:25:41.623451 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-04-05 12:25:41.623462 | orchestrator | 2025-04-05 12:25:41.623473 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-04-05 12:25:41.623483 | orchestrator | Saturday 05 April 2025 12:25:21 
+0000 (0:00:00.729) 0:03:28.202 ******** 2025-04-05 12:25:41.623493 | orchestrator | ok: [testbed-manager] 2025-04-05 12:25:41.623503 | orchestrator | 2025-04-05 12:25:41.623513 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-04-05 12:25:41.623523 | orchestrator | Saturday 05 April 2025 12:25:21 +0000 (0:00:00.096) 0:03:28.298 ******** 2025-04-05 12:25:41.623533 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-04-05 12:25:41.623543 | orchestrator | 2025-04-05 12:25:41.623554 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-04-05 12:25:41.623563 | orchestrator | Saturday 05 April 2025 12:25:21 +0000 (0:00:00.312) 0:03:28.610 ******** 2025-04-05 12:25:41.623574 | orchestrator | changed: [testbed-manager] 2025-04-05 12:25:41.623584 | orchestrator | 2025-04-05 12:25:41.623594 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-04-05 12:25:41.623604 | orchestrator | 2025-04-05 12:25:41.623614 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-04-05 12:25:41.623624 | orchestrator | Saturday 05 April 2025 12:25:26 +0000 (0:00:04.972) 0:03:33.583 ******** 2025-04-05 12:25:41.623634 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:25:41.623644 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:25:41.623655 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:25:41.623665 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:25:41.623675 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:25:41.623685 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:25:41.623695 | orchestrator | 2025-04-05 12:25:41.623710 | orchestrator | TASK [Manage labels] *********************************************************** 2025-04-05 12:25:41.623720 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.550) 0:03:34.133 ******** 2025-04-05 12:25:41.623730 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-04-05 12:25:41.623740 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-04-05 12:25:41.623750 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-04-05 12:25:41.623760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-05 12:25:41.623784 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-05 12:25:41.623794 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-05 12:25:41.623804 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-05 12:25:41.623814 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-05 12:25:41.623828 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-05 12:25:41.623839 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-05 12:25:41.623849 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-05 12:25:41.623859 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-05 12:25:41.623869 | 
orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-05 12:25:41.623879 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-05 12:25:41.623889 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-04-05 12:25:41.623899 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-05 12:25:41.623909 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-04-05 12:25:41.623919 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-04-05 12:25:41.623929 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-05 12:25:41.623939 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-05 12:25:41.623949 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-05 12:25:41.623959 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-05 12:25:41.623969 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-05 12:25:41.623979 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-05 12:25:41.623989 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-05 12:25:41.623999 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-05 12:25:41.624009 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-05 12:25:41.624019 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-05 12:25:41.624029 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-05 12:25:41.624039 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-05 12:25:41.624049 | orchestrator | 2025-04-05 12:25:41.624063 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-04-05 12:25:44.647999 | orchestrator | Saturday 05 April 2025 12:25:38 +0000 (0:00:11.535) 0:03:45.669 ******** 2025-04-05 12:25:44.648121 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:44.648166 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:44.648180 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:44.648193 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:44.648205 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:25:44.648218 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:44.648230 | orchestrator | 2025-04-05 12:25:44.648243 | orchestrator | TASK [Manage taints] *********************************************************** 2025-04-05 12:25:44.648256 | orchestrator | Saturday 05 April 2025 12:25:39 +0000 (0:00:00.450) 0:03:46.119 ******** 2025-04-05 12:25:44.648278 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:25:44.648291 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:25:44.648304 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:25:44.648317 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:25:44.648330 | orchestrator | skipping: [testbed-node-1] 
2025-04-05 12:25:44.648342 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:25:44.648355 | orchestrator | 2025-04-05 12:25:44.648368 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:25:44.648380 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:25:44.648396 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-05 12:25:44.648409 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-04-05 12:25:44.648422 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-04-05 12:25:44.648434 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-05 12:25:44.648447 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-05 12:25:44.648459 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-05 12:25:44.648472 | orchestrator | 2025-04-05 12:25:44.648484 | orchestrator | 2025-04-05 12:25:44.648496 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:25:44.648509 | orchestrator | Saturday 05 April 2025 12:25:39 +0000 (0:00:00.446) 0:03:46.566 ******** 2025-04-05 12:25:44.648521 | orchestrator | =============================================================================== 2025-04-05 12:25:44.648536 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.30s 2025-04-05 12:25:44.648551 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 30.18s 2025-04-05 12:25:44.648566 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.87s 2025-04-05 12:25:44.648596 | orchestrator | Manage labels ---------------------------------------------------------- 11.54s 2025-04-05 12:25:44.648611 | orchestrator | kubectl : Install required packages ------------------------------------ 10.19s 2025-04-05 12:25:44.648627 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.04s 2025-04-05 12:25:44.648641 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.26s 2025-04-05 12:25:44.648656 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.24s 2025-04-05 12:25:44.648670 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.97s 2025-04-05 12:25:44.648685 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.05s 2025-04-05 12:25:44.648700 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.68s 2025-04-05 12:25:44.648723 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.40s 2025-04-05 12:25:44.648738 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.26s 2025-04-05 12:25:44.648752 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.17s 2025-04-05 12:25:44.648788 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact 
------------------------------- 2.08s 2025-04-05 12:25:44.648803 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.07s 2025-04-05 12:25:44.648817 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.69s 2025-04-05 12:25:44.648831 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.66s 2025-04-05 12:25:44.648845 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.64s 2025-04-05 12:25:44.648859 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.63s 2025-04-05 12:25:44.648874 | orchestrator | 2025-04-05 12:25:41 | INFO  | Task 294926e3-e30e-4f1f-b343-db06b84207d9 is in state STARTED 2025-04-05 12:25:44.648888 | orchestrator | 2025-04-05 12:25:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:44.648917 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:44.649359 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:44.649455 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task 8bffd860-fb0a-40cd-a7bf-435615552957 is in state STARTED 2025-04-05 12:25:44.653123 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:44.653316 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:44.654180 | orchestrator | 2025-04-05 12:25:44 | INFO  | Task 294926e3-e30e-4f1f-b343-db06b84207d9 is in state STARTED 2025-04-05 12:25:47.715605 | orchestrator | 2025-04-05 12:25:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:47.715748 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:47.716011 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:47.716047 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task 8bffd860-fb0a-40cd-a7bf-435615552957 is in state SUCCESS 2025-04-05 12:25:47.720490 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:47.720939 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:47.724901 | orchestrator | 2025-04-05 12:25:47 | INFO  | Task 294926e3-e30e-4f1f-b343-db06b84207d9 is in state STARTED 2025-04-05 12:25:50.764696 | orchestrator | 2025-04-05 12:25:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:50.764902 | orchestrator | 2025-04-05 12:25:50 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:50.764991 | orchestrator | 2025-04-05 12:25:50 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:50.768362 | orchestrator | 2025-04-05 12:25:50 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:50.768839 | orchestrator | 2025-04-05 12:25:50 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:50.770470 | orchestrator | 2025-04-05 12:25:50 | INFO  | Task 294926e3-e30e-4f1f-b343-db06b84207d9 is in state STARTED 2025-04-05 12:25:53.801092 | orchestrator | 2025-04-05 12:25:50 | INFO  | Wait 1 second(s) until the next check 
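The interleaved INFO lines above and below show the deployment driver polling its background tasks roughly once per second until each one leaves the STARTED state and reports SUCCESS. A minimal sketch of such a wait loop, for orientation only (the function name and the get_state helper are hypothetical; the actual OSISM manager implementation may differ):

    import time

    def wait_for_tasks(get_state, task_ids, interval=1):
        # Poll every task until it reports SUCCESS, mirroring the
        # "is in state STARTED" / "Wait 1 second(s) until the next check"
        # messages seen in this log.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)  # e.g. look up a task result by its UUID
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

The loop only removes a task from the pending set once it reaches SUCCESS, which is why tasks such as f61c1535-db41-4def-9b55-154923cffc65 keep reappearing in the log until their final state is printed.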
2025-04-05 12:25:53.801218 | orchestrator | 2025-04-05 12:25:53 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:56.835907 | orchestrator | 2025-04-05 12:25:53 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:56.836019 | orchestrator | 2025-04-05 12:25:53 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:56.836039 | orchestrator | 2025-04-05 12:25:53 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:56.836054 | orchestrator | 2025-04-05 12:25:53 | INFO  | Task 294926e3-e30e-4f1f-b343-db06b84207d9 is in state SUCCESS 2025-04-05 12:25:56.836069 | orchestrator | 2025-04-05 12:25:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:56.836148 | orchestrator | 2025-04-05 12:25:56 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:56.836246 | orchestrator | 2025-04-05 12:25:56 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:56.836267 | orchestrator | 2025-04-05 12:25:56 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:56.836287 | orchestrator | 2025-04-05 12:25:56 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:25:59.875573 | orchestrator | 2025-04-05 12:25:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:25:59.875706 | orchestrator | 2025-04-05 12:25:59 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:25:59.878799 | orchestrator | 2025-04-05 12:25:59 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:25:59.879239 | orchestrator | 2025-04-05 12:25:59 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:25:59.879901 | orchestrator | 2025-04-05 12:25:59 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:02.909128 | orchestrator | 2025-04-05 12:25:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:02.909262 | orchestrator | 2025-04-05 12:26:02 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:26:02.909941 | orchestrator | 2025-04-05 12:26:02 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:02.909979 | orchestrator | 2025-04-05 12:26:02 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:02.913109 | orchestrator | 2025-04-05 12:26:02 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:05.940963 | orchestrator | 2025-04-05 12:26:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:05.941152 | orchestrator | 2025-04-05 12:26:05 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:26:05.941245 | orchestrator | 2025-04-05 12:26:05 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:05.941896 | orchestrator | 2025-04-05 12:26:05 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:05.942604 | orchestrator | 2025-04-05 12:26:05 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:05.942720 | orchestrator | 2025-04-05 12:26:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:08.971215 | orchestrator | 2025-04-05 12:26:08 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 
2025-04-05 12:26:08.972541 | orchestrator | 2025-04-05 12:26:08 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:08.974105 | orchestrator | 2025-04-05 12:26:08 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:08.979145 | orchestrator | 2025-04-05 12:26:08 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:08.979622 | orchestrator | 2025-04-05 12:26:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:12.017249 | orchestrator | 2025-04-05 12:26:12 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:26:12.017457 | orchestrator | 2025-04-05 12:26:12 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:12.017487 | orchestrator | 2025-04-05 12:26:12 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:12.018168 | orchestrator | 2025-04-05 12:26:12 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:15.055553 | orchestrator | 2025-04-05 12:26:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:15.055698 | orchestrator | 2025-04-05 12:26:15 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:26:15.055875 | orchestrator | 2025-04-05 12:26:15 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:15.055960 | orchestrator | 2025-04-05 12:26:15 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:15.056434 | orchestrator | 2025-04-05 12:26:15 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:18.100489 | orchestrator | 2025-04-05 12:26:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:18.100617 | orchestrator | 2025-04-05 12:26:18 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state STARTED 2025-04-05 12:26:18.102358 | orchestrator | 2025-04-05 12:26:18 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:18.104122 | orchestrator | 2025-04-05 12:26:18 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:18.106096 | orchestrator | 2025-04-05 12:26:18 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:18.106603 | orchestrator | 2025-04-05 12:26:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:21.145944 | orchestrator | 2025-04-05 12:26:21 | INFO  | Task f61c1535-db41-4def-9b55-154923cffc65 is in state SUCCESS 2025-04-05 12:26:21.147525 | orchestrator | 2025-04-05 12:26:21.147573 | orchestrator | 2025-04-05 12:26:21.147591 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-04-05 12:26:21.147607 | orchestrator | 2025-04-05 12:26:21.147622 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-05 12:26:21.147638 | orchestrator | Saturday 05 April 2025 12:25:43 +0000 (0:00:00.130) 0:00:00.130 ******** 2025-04-05 12:26:21.147654 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-05 12:26:21.147669 | orchestrator | 2025-04-05 12:26:21.147684 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-05 12:26:21.147699 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:00.656) 0:00:00.787 ******** 2025-04-05 12:26:21.147714 | orchestrator | 
changed: [testbed-manager] 2025-04-05 12:26:21.147805 | orchestrator | 2025-04-05 12:26:21.147824 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-04-05 12:26:21.147838 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:01.018) 0:00:01.805 ******** 2025-04-05 12:26:21.147852 | orchestrator | changed: [testbed-manager] 2025-04-05 12:26:21.147867 | orchestrator | 2025-04-05 12:26:21.147881 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:26:21.147924 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:26:21.147940 | orchestrator | 2025-04-05 12:26:21.147954 | orchestrator | 2025-04-05 12:26:21.147968 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:26:21.147981 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.349) 0:00:02.155 ******** 2025-04-05 12:26:21.147995 | orchestrator | =============================================================================== 2025-04-05 12:26:21.148009 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s 2025-04-05 12:26:21.148023 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s 2025-04-05 12:26:21.148037 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.35s 2025-04-05 12:26:21.148050 | orchestrator | 2025-04-05 12:26:21.148064 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.18.4 2025-04-05 12:26:21.148078 | orchestrator | 2025-04-05 12:26:21.148106 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-04-05 12:26:21.148124 | orchestrator | 2025-04-05 12:26:21.148140 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-04-05 12:26:21.148156 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:00.131) 0:00:00.131 ******** 2025-04-05 12:26:21.148171 | orchestrator | ok: [testbed-manager] 2025-04-05 12:26:21.148187 | orchestrator | 2025-04-05 12:26:21.148203 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-04-05 12:26:21.148218 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:00.605) 0:00:00.736 ******** 2025-04-05 12:26:21.148234 | orchestrator | ok: [testbed-manager] 2025-04-05 12:26:21.148247 | orchestrator | 2025-04-05 12:26:21.148261 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-05 12:26:21.148275 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.528) 0:00:01.264 ******** 2025-04-05 12:26:21.148289 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-05 12:26:21.148303 | orchestrator | 2025-04-05 12:26:21.148316 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-05 12:26:21.148330 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.643) 0:00:01.908 ******** 2025-04-05 12:26:21.148344 | orchestrator | changed: [testbed-manager] 2025-04-05 12:26:21.148358 | orchestrator | 2025-04-05 12:26:21.148372 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-04-05 12:26:21.148385 | orchestrator | Saturday 05 April 2025 12:25:47 +0000 (0:00:01.194) 
0:00:03.103 ******** 2025-04-05 12:26:21.148399 | orchestrator | changed: [testbed-manager] 2025-04-05 12:26:21.148413 | orchestrator | 2025-04-05 12:26:21.148427 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-04-05 12:26:21.148441 | orchestrator | Saturday 05 April 2025 12:25:48 +0000 (0:00:00.835) 0:00:03.938 ******** 2025-04-05 12:26:21.148454 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-05 12:26:21.148468 | orchestrator | 2025-04-05 12:26:21.148482 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-04-05 12:26:21.148495 | orchestrator | Saturday 05 April 2025 12:25:49 +0000 (0:00:01.811) 0:00:05.750 ******** 2025-04-05 12:26:21.148509 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-05 12:26:21.148523 | orchestrator | 2025-04-05 12:26:21.148537 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-04-05 12:26:21.148550 | orchestrator | Saturday 05 April 2025 12:25:50 +0000 (0:00:00.984) 0:00:06.735 ******** 2025-04-05 12:26:21.148564 | orchestrator | ok: [testbed-manager] 2025-04-05 12:26:21.148578 | orchestrator | 2025-04-05 12:26:21.148592 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-04-05 12:26:21.148606 | orchestrator | Saturday 05 April 2025 12:25:51 +0000 (0:00:00.380) 0:00:07.115 ******** 2025-04-05 12:26:21.148620 | orchestrator | ok: [testbed-manager] 2025-04-05 12:26:21.148642 | orchestrator | 2025-04-05 12:26:21.148656 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:26:21.148670 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:26:21.148684 | orchestrator | 2025-04-05 12:26:21.148698 | orchestrator | 2025-04-05 12:26:21.148712 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:26:21.148726 | orchestrator | Saturday 05 April 2025 12:25:51 +0000 (0:00:00.252) 0:00:07.368 ******** 2025-04-05 12:26:21.148739 | orchestrator | =============================================================================== 2025-04-05 12:26:21.148753 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.81s 2025-04-05 12:26:21.148786 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s 2025-04-05 12:26:21.148811 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.98s 2025-04-05 12:26:21.148827 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.84s 2025-04-05 12:26:21.148841 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.64s 2025-04-05 12:26:21.148855 | orchestrator | Get home directory of operator user ------------------------------------- 0.61s 2025-04-05 12:26:21.148868 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2025-04-05 12:26:21.148882 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 2025-04-05 12:26:21.148896 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.25s 2025-04-05 12:26:21.148910 | orchestrator | 2025-04-05 12:26:21.148923 | orchestrator | 2025-04-05 12:26:21.148943 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2025-04-05 12:26:21.148957 | orchestrator | 2025-04-05 12:26:21.148971 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-05 12:26:21.148985 | orchestrator | Saturday 05 April 2025 12:24:14 +0000 (0:00:00.071) 0:00:00.071 ******** 2025-04-05 12:26:21.148999 | orchestrator | ok: [localhost] => { 2025-04-05 12:26:21.149014 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-04-05 12:26:21.149029 | orchestrator | } 2025-04-05 12:26:21.149043 | orchestrator | 2025-04-05 12:26:21.149057 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-05 12:26:21.149071 | orchestrator | Saturday 05 April 2025 12:24:14 +0000 (0:00:00.111) 0:00:00.183 ******** 2025-04-05 12:26:21.149085 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-05 12:26:21.149100 | orchestrator | ...ignoring 2025-04-05 12:26:21.149114 | orchestrator | 2025-04-05 12:26:21.149128 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-05 12:26:21.149142 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:02.836) 0:00:03.019 ******** 2025-04-05 12:26:21.149156 | orchestrator | skipping: [localhost] 2025-04-05 12:26:21.149169 | orchestrator | 2025-04-05 12:26:21.149183 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-05 12:26:21.149197 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:00.076) 0:00:03.095 ******** 2025-04-05 12:26:21.149211 | orchestrator | ok: [localhost] 2025-04-05 12:26:21.149225 | orchestrator | 2025-04-05 12:26:21.149239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:26:21.149253 | orchestrator | 2025-04-05 12:26:21.149267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:26:21.149281 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:00.199) 0:00:03.294 ******** 2025-04-05 12:26:21.149295 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:26:21.149309 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:26:21.149323 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:26:21.149337 | orchestrator | 2025-04-05 12:26:21.149358 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:26:21.149373 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:00.314) 0:00:03.609 ******** 2025-04-05 12:26:21.149387 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-05 12:26:21.149401 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-04-05 12:26:21.149415 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-05 12:26:21.149429 | orchestrator | 2025-04-05 12:26:21.149443 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-05 12:26:21.149456 | orchestrator | 2025-04-05 12:26:21.149470 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-05 12:26:21.149483 | orchestrator | Saturday 05 April 2025 12:24:18 +0000 (0:00:00.901) 0:00:04.511 ******** 
2025-04-05 12:26:21.149497 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:26:21.149511 | orchestrator | 2025-04-05 12:26:21.149525 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-05 12:26:21.149539 | orchestrator | Saturday 05 April 2025 12:24:19 +0000 (0:00:00.943) 0:00:05.454 ******** 2025-04-05 12:26:21.149553 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:26:21.149566 | orchestrator | 2025-04-05 12:26:21.149580 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-05 12:26:21.149594 | orchestrator | Saturday 05 April 2025 12:24:20 +0000 (0:00:00.983) 0:00:06.438 ******** 2025-04-05 12:26:21.149608 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.149621 | orchestrator | 2025-04-05 12:26:21.149635 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-04-05 12:26:21.149649 | orchestrator | Saturday 05 April 2025 12:24:21 +0000 (0:00:00.548) 0:00:06.986 ******** 2025-04-05 12:26:21.149662 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.149676 | orchestrator | 2025-04-05 12:26:21.149690 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-05 12:26:21.149704 | orchestrator | Saturday 05 April 2025 12:24:21 +0000 (0:00:00.811) 0:00:07.798 ******** 2025-04-05 12:26:21.149717 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.149731 | orchestrator | 2025-04-05 12:26:21.149745 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-05 12:26:21.149758 | orchestrator | Saturday 05 April 2025 12:24:22 +0000 (0:00:00.702) 0:00:08.500 ******** 2025-04-05 12:26:21.149788 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.149808 | orchestrator | 2025-04-05 12:26:21.149822 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-05 12:26:21.149837 | orchestrator | Saturday 05 April 2025 12:24:23 +0000 (0:00:00.694) 0:00:09.195 ******** 2025-04-05 12:26:21.149851 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:26:21.149865 | orchestrator | 2025-04-05 12:26:21.149886 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-05 12:26:21.149900 | orchestrator | Saturday 05 April 2025 12:24:24 +0000 (0:00:00.847) 0:00:10.042 ******** 2025-04-05 12:26:21.149915 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:26:21.149928 | orchestrator | 2025-04-05 12:26:21.149942 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-05 12:26:21.149961 | orchestrator | Saturday 05 April 2025 12:24:24 +0000 (0:00:00.782) 0:00:10.824 ******** 2025-04-05 12:26:21.149975 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.149989 | orchestrator | 2025-04-05 12:26:21.150003 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-05 12:26:21.150064 | orchestrator | Saturday 05 April 2025 12:24:25 +0000 (0:00:00.450) 0:00:11.275 ******** 2025-04-05 12:26:21.150082 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.150096 | orchestrator | 2025-04-05 12:26:21.150110 | orchestrator | TASK [rabbitmq : Ensuring 
config directories exist] **************************** 2025-04-05 12:26:21.150132 | orchestrator | Saturday 05 April 2025 12:24:25 +0000 (0:00:00.251) 0:00:11.526 ******** 2025-04-05 12:26:21.150150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150201 | orchestrator | 2025-04-05 12:26:21.150216 | orchestrator | TASK [rabbitmq : Copying over 
config.json files for services] ****************** 2025-04-05 12:26:21.150230 | orchestrator | Saturday 05 April 2025 12:24:26 +0000 (0:00:00.760) 0:00:12.287 ******** 2025-04-05 12:26:21.150254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.150308 | orchestrator | 2025-04-05 12:26:21.150323 | orchestrator | TASK [rabbitmq : Copying over 
rabbitmq-env.conf] ******************************* 2025-04-05 12:26:21.150337 | orchestrator | Saturday 05 April 2025 12:24:27 +0000 (0:00:01.500) 0:00:13.788 ******** 2025-04-05 12:26:21.150351 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-05 12:26:21.150365 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-05 12:26:21.150385 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-05 12:26:21.150400 | orchestrator | 2025-04-05 12:26:21.150414 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-05 12:26:21.150428 | orchestrator | Saturday 05 April 2025 12:24:29 +0000 (0:00:01.432) 0:00:15.221 ******** 2025-04-05 12:26:21.150442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-05 12:26:21.150461 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-05 12:26:21.150475 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-05 12:26:21.150489 | orchestrator | 2025-04-05 12:26:21.150509 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-05 12:26:21.150530 | orchestrator | Saturday 05 April 2025 12:24:31 +0000 (0:00:02.342) 0:00:17.564 ******** 2025-04-05 12:26:21.150544 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-05 12:26:21.150558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-05 12:26:21.150572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-05 12:26:21.150629 | orchestrator | 2025-04-05 12:26:21.150646 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-05 12:26:21.150660 | orchestrator | Saturday 05 April 2025 12:24:33 +0000 (0:00:02.050) 0:00:19.614 ******** 2025-04-05 12:26:21.150674 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-05 12:26:21.150688 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-05 12:26:21.150702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-05 12:26:21.150716 | orchestrator | 2025-04-05 12:26:21.150730 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-05 12:26:21.150744 | orchestrator | Saturday 05 April 2025 12:24:36 +0000 (0:00:02.580) 0:00:22.194 ******** 2025-04-05 12:26:21.150759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-05 12:26:21.150826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-05 12:26:21.150841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-05 12:26:21.150855 | orchestrator | 2025-04-05 12:26:21.150869 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-05 12:26:21.150883 | orchestrator | Saturday 05 April 2025 12:24:37 +0000 
(0:00:01.566) 0:00:23.761 ******** 2025-04-05 12:26:21.150897 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-05 12:26:21.150911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-05 12:26:21.150925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-05 12:26:21.150939 | orchestrator | 2025-04-05 12:26:21.150953 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-05 12:26:21.150968 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:01.977) 0:00:25.738 ******** 2025-04-05 12:26:21.150982 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.150996 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:26:21.151010 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:26:21.151024 | orchestrator | 2025-04-05 12:26:21.151038 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-05 12:26:21.151052 | orchestrator | Saturday 05 April 2025 12:24:40 +0000 (0:00:01.005) 0:00:26.745 ******** 2025-04-05 12:26:21.151065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.151096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.151111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:26:21.151124 | orchestrator | 2025-04-05 12:26:21.151137 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-05 12:26:21.151149 | orchestrator | Saturday 05 April 2025 12:24:42 +0000 (0:00:01.899) 0:00:28.644 ******** 2025-04-05 12:26:21.151161 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:26:21.151174 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:26:21.151186 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:26:21.151199 | orchestrator | 2025-04-05 12:26:21.151211 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-05 12:26:21.151224 | orchestrator | Saturday 05 April 2025 12:24:43 +0000 (0:00:01.065) 0:00:29.710 ******** 2025-04-05 12:26:21.151236 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:26:21.151249 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:26:21.151261 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:26:21.151274 | orchestrator | 2025-04-05 12:26:21.151286 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-05 12:26:21.151299 | orchestrator | Saturday 05 April 2025 12:24:50 +0000 (0:00:06.353) 0:00:36.064 ******** 2025-04-05 12:26:21.151311 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:26:21.151323 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:26:21.151336 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:26:21.151348 | orchestrator | 2025-04-05 12:26:21.151366 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-05 12:26:21.151379 | orchestrator | 2025-04-05 12:26:21.151392 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-05 12:26:21.151405 | orchestrator | Saturday 05 April 2025 12:24:50 +0000 (0:00:00.332) 0:00:36.396 ******** 2025-04-05 12:26:21.151417 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:26:21.151442 | orchestrator | 2025-04-05 12:26:21.151454 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-05 12:26:21.151467 | orchestrator | Saturday 05 April 2025 12:24:51 +0000 (0:00:00.697) 0:00:37.093 ******** 2025-04-05 12:26:21.151479 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:26:21.151491 | orchestrator | 2025-04-05 12:26:21.151504 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-05 12:26:21.151516 | orchestrator | Saturday 05 April 2025 12:24:51 +0000 
(0:00:00.264) 0:00:37.358 ******** 2025-04-05 12:26:21.151529 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:26:21.151541 | orchestrator | 2025-04-05 12:26:21.151553 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-05 12:26:21.151566 | orchestrator | Saturday 05 April 2025 12:24:53 +0000 (0:00:02.306) 0:00:39.665 ******** 2025-04-05 12:26:21.151578 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:26:21.151590 | orchestrator | 2025-04-05 12:26:21.151603 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-05 12:26:21.151616 | orchestrator | 2025-04-05 12:26:21.151628 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-05 12:26:21.151640 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:51.261) 0:01:30.926 ******** 2025-04-05 12:26:21.151653 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:26:21.151665 | orchestrator | 2025-04-05 12:26:21.151677 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-05 12:26:21.151690 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.630) 0:01:31.556 ******** 2025-04-05 12:26:21.151702 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:26:21.151714 | orchestrator | 2025-04-05 12:26:21.151727 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-05 12:26:21.151739 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.172) 0:01:31.728 ******** 2025-04-05 12:26:21.151752 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:26:21.151805 | orchestrator | 2025-04-05 12:26:21.151819 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-05 12:26:21.151832 | orchestrator | Saturday 05 April 2025 12:25:47 +0000 (0:00:01.944) 0:01:33.673 ******** 2025-04-05 12:26:21.151875 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:26:21.151888 | orchestrator | 2025-04-05 12:26:21.151901 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-05 12:26:21.151913 | orchestrator | 2025-04-05 12:26:21.151926 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-05 12:26:21.151945 | orchestrator | Saturday 05 April 2025 12:25:59 +0000 (0:00:11.594) 0:01:45.268 ******** 2025-04-05 12:26:21.151958 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:26:21.151971 | orchestrator | 2025-04-05 12:26:21.151983 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-05 12:26:21.151996 | orchestrator | Saturday 05 April 2025 12:25:59 +0000 (0:00:00.599) 0:01:45.867 ******** 2025-04-05 12:26:21.152008 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:26:21.152021 | orchestrator | 2025-04-05 12:26:21.152033 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-05 12:26:21.152046 | orchestrator | Saturday 05 April 2025 12:26:00 +0000 (0:00:00.241) 0:01:46.108 ******** 2025-04-05 12:26:21.152058 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:26:21.152071 | orchestrator | 2025-04-05 12:26:21.152083 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-05 12:26:21.152095 | orchestrator | Saturday 05 April 2025 12:26:01 +0000 (0:00:01.682) 
0:01:47.791 ******** 2025-04-05 12:26:21.152107 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:26:21.152120 | orchestrator | 2025-04-05 12:26:21.152132 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-04-05 12:26:21.152144 | orchestrator | 2025-04-05 12:26:21.152157 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-05 12:26:21.152169 | orchestrator | Saturday 05 April 2025 12:26:14 +0000 (0:00:12.997) 0:02:00.789 ******** 2025-04-05 12:26:21.152188 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:26:21.152201 | orchestrator | 2025-04-05 12:26:21.152219 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-05 12:26:21.152231 | orchestrator | Saturday 05 April 2025 12:26:15 +0000 (0:00:01.021) 0:02:01.810 ******** 2025-04-05 12:26:21.152243 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-05 12:26:21.152253 | orchestrator | enable_outward_rabbitmq_True 2025-04-05 12:26:21.152263 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-05 12:26:21.152273 | orchestrator | outward_rabbitmq_restart 2025-04-05 12:26:21.152284 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:26:21.152294 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:26:21.152304 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:26:21.152314 | orchestrator | 2025-04-05 12:26:21.152324 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-05 12:26:21.152334 | orchestrator | skipping: no hosts matched 2025-04-05 12:26:21.152344 | orchestrator | 2025-04-05 12:26:21.152354 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-05 12:26:21.152364 | orchestrator | skipping: no hosts matched 2025-04-05 12:26:21.152375 | orchestrator | 2025-04-05 12:26:21.152385 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-05 12:26:21.152395 | orchestrator | skipping: no hosts matched 2025-04-05 12:26:21.152405 | orchestrator | 2025-04-05 12:26:21.152415 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:26:21.152426 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-05 12:26:21.152436 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-05 12:26:21.152447 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:26:21.152457 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:26:21.152467 | orchestrator | 2025-04-05 12:26:21.152478 | orchestrator | 2025-04-05 12:26:21.152488 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:26:21.152498 | orchestrator | Saturday 05 April 2025 12:26:18 +0000 (0:00:02.399) 0:02:04.210 ******** 2025-04-05 12:26:21.152508 | orchestrator | =============================================================================== 2025-04-05 12:26:21.152518 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.85s 2025-04-05 12:26:21.152528 | orchestrator | rabbitmq : Running 
RabbitMQ bootstrap container ------------------------- 6.35s 2025-04-05 12:26:21.152538 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.93s 2025-04-05 12:26:21.152549 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.84s 2025-04-05 12:26:21.152559 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.58s 2025-04-05 12:26:21.152569 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.40s 2025-04-05 12:26:21.152579 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.34s 2025-04-05 12:26:21.152589 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.05s 2025-04-05 12:26:21.152599 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.98s 2025-04-05 12:26:21.152609 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s 2025-04-05 12:26:21.152619 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.90s 2025-04-05 12:26:21.152629 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.57s 2025-04-05 12:26:21.152644 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.50s 2025-04-05 12:26:21.152655 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.43s 2025-04-05 12:26:21.152665 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.07s 2025-04-05 12:26:21.152675 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.02s 2025-04-05 12:26:21.152690 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.01s 2025-04-05 12:26:21.152798 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2025-04-05 12:26:21.152813 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.94s 2025-04-05 12:26:21.152823 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2025-04-05 12:26:21.152833 | orchestrator | 2025-04-05 12:26:21 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:21.152847 | orchestrator | 2025-04-05 12:26:21 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:24.192402 | orchestrator | 2025-04-05 12:26:21 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:24.192517 | orchestrator | 2025-04-05 12:26:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:24.192554 | orchestrator | 2025-04-05 12:26:24 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:24.195135 | orchestrator | 2025-04-05 12:26:24 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:24.196848 | orchestrator | 2025-04-05 12:26:24 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:27.240390 | orchestrator | 2025-04-05 12:26:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:27.240526 | orchestrator | 2025-04-05 12:26:27 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:27.242645 | orchestrator | 2025-04-05 12:26:27 | INFO  | Task 
8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:27.243376 | orchestrator | 2025-04-05 12:26:27 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:30.294476 | orchestrator | 2025-04-05 12:26:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:30.294611 | orchestrator | 2025-04-05 12:26:30 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:30.295670 | orchestrator | 2025-04-05 12:26:30 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:30.297561 | orchestrator | 2025-04-05 12:26:30 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:33.346335 | orchestrator | 2025-04-05 12:26:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:33.346471 | orchestrator | 2025-04-05 12:26:33 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:33.347383 | orchestrator | 2025-04-05 12:26:33 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:33.348084 | orchestrator | 2025-04-05 12:26:33 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:36.401210 | orchestrator | 2025-04-05 12:26:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:36.401345 | orchestrator | 2025-04-05 12:26:36 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:36.402283 | orchestrator | 2025-04-05 12:26:36 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:36.403679 | orchestrator | 2025-04-05 12:26:36 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:36.404249 | orchestrator | 2025-04-05 12:26:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:39.446609 | orchestrator | 2025-04-05 12:26:39 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:39.448839 | orchestrator | 2025-04-05 12:26:39 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:39.450309 | orchestrator | 2025-04-05 12:26:39 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:42.488939 | orchestrator | 2025-04-05 12:26:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:42.489070 | orchestrator | 2025-04-05 12:26:42 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:42.493158 | orchestrator | 2025-04-05 12:26:42 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:42.494853 | orchestrator | 2025-04-05 12:26:42 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:42.494934 | orchestrator | 2025-04-05 12:26:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:45.547373 | orchestrator | 2025-04-05 12:26:45 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:45.549512 | orchestrator | 2025-04-05 12:26:45 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:45.551445 | orchestrator | 2025-04-05 12:26:45 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:45.551594 | orchestrator | 2025-04-05 12:26:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:48.594389 | orchestrator | 2025-04-05 12:26:48 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state 
STARTED 2025-04-05 12:26:48.595060 | orchestrator | 2025-04-05 12:26:48 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:48.596121 | orchestrator | 2025-04-05 12:26:48 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:48.596199 | orchestrator | 2025-04-05 12:26:48 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:51.634956 | orchestrator | 2025-04-05 12:26:51 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:51.635439 | orchestrator | 2025-04-05 12:26:51 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:51.635481 | orchestrator | 2025-04-05 12:26:51 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:54.680487 | orchestrator | 2025-04-05 12:26:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:54.680622 | orchestrator | 2025-04-05 12:26:54 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:54.682756 | orchestrator | 2025-04-05 12:26:54 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:54.682878 | orchestrator | 2025-04-05 12:26:54 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:26:54.682903 | orchestrator | 2025-04-05 12:26:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:26:57.719555 | orchestrator | 2025-04-05 12:26:57 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:26:57.719899 | orchestrator | 2025-04-05 12:26:57 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:26:57.722424 | orchestrator | 2025-04-05 12:26:57 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:00.757552 | orchestrator | 2025-04-05 12:26:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:00.757682 | orchestrator | 2025-04-05 12:27:00 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:00.760082 | orchestrator | 2025-04-05 12:27:00 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:00.762513 | orchestrator | 2025-04-05 12:27:00 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:03.794335 | orchestrator | 2025-04-05 12:27:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:03.794455 | orchestrator | 2025-04-05 12:27:03 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:03.796010 | orchestrator | 2025-04-05 12:27:03 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:03.797933 | orchestrator | 2025-04-05 12:27:03 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:03.798092 | orchestrator | 2025-04-05 12:27:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:06.843566 | orchestrator | 2025-04-05 12:27:06 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:06.844998 | orchestrator | 2025-04-05 12:27:06 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:06.847448 | orchestrator | 2025-04-05 12:27:06 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:06.847569 | orchestrator | 2025-04-05 12:27:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:09.899039 | orchestrator 
| 2025-04-05 12:27:09 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:12.941266 | orchestrator | 2025-04-05 12:27:09 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:12.941382 | orchestrator | 2025-04-05 12:27:09 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:12.941402 | orchestrator | 2025-04-05 12:27:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:12.941455 | orchestrator | 2025-04-05 12:27:12 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:12.941711 | orchestrator | 2025-04-05 12:27:12 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:12.942641 | orchestrator | 2025-04-05 12:27:12 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:15.967972 | orchestrator | 2025-04-05 12:27:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:15.968092 | orchestrator | 2025-04-05 12:27:15 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:18.996635 | orchestrator | 2025-04-05 12:27:15 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:18.996741 | orchestrator | 2025-04-05 12:27:15 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:18.996759 | orchestrator | 2025-04-05 12:27:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:18.996826 | orchestrator | 2025-04-05 12:27:18 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:18.999365 | orchestrator | 2025-04-05 12:27:18 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:19.007586 | orchestrator | 2025-04-05 12:27:19 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:19.007924 | orchestrator | 2025-04-05 12:27:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:22.042641 | orchestrator | 2025-04-05 12:27:22 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:22.045511 | orchestrator | 2025-04-05 12:27:22 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:22.047362 | orchestrator | 2025-04-05 12:27:22 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:22.047596 | orchestrator | 2025-04-05 12:27:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:25.083293 | orchestrator | 2025-04-05 12:27:25 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:25.083525 | orchestrator | 2025-04-05 12:27:25 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:25.084254 | orchestrator | 2025-04-05 12:27:25 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state STARTED 2025-04-05 12:27:28.122176 | orchestrator | 2025-04-05 12:27:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:28.122304 | orchestrator | 2025-04-05 12:27:28 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:28.123721 | orchestrator | 2025-04-05 12:27:28 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:28.123756 | orchestrator | 2025-04-05 12:27:28 | INFO  | Task 5d67e1b2-93a1-4936-9a3f-7b3782034294 is in state SUCCESS 2025-04-05 12:27:28.126456 | orchestrator | 2025-04-05 
12:27:28.126508 | orchestrator | 2025-04-05 12:27:28.126523 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:27:28.126538 | orchestrator | 2025-04-05 12:27:28.126552 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:27:28.126566 | orchestrator | Saturday 05 April 2025 12:25:07 +0000 (0:00:00.137) 0:00:00.137 ******** 2025-04-05 12:27:28.126581 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.126596 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.126609 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.126623 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:27:28.126637 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:27:28.126651 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:27:28.126665 | orchestrator | 2025-04-05 12:27:28.126687 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:27:28.126701 | orchestrator | Saturday 05 April 2025 12:25:08 +0000 (0:00:00.666) 0:00:00.804 ******** 2025-04-05 12:27:28.126715 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-05 12:27:28.126729 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-05 12:27:28.126743 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-05 12:27:28.126757 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-05 12:27:28.126821 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-05 12:27:28.126836 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-05 12:27:28.126849 | orchestrator | 2025-04-05 12:27:28.126864 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-05 12:27:28.126877 | orchestrator | 2025-04-05 12:27:28.126891 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-05 12:27:28.126905 | orchestrator | Saturday 05 April 2025 12:25:09 +0000 (0:00:00.988) 0:00:01.792 ******** 2025-04-05 12:27:28.126920 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:27:28.126935 | orchestrator | 2025-04-05 12:27:28.126949 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-05 12:27:28.126963 | orchestrator | Saturday 05 April 2025 12:25:10 +0000 (0:00:00.999) 0:00:02.791 ******** 2025-04-05 12:27:28.126978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127036 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127101 | orchestrator | 2025-04-05 12:27:28.127128 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-05 12:27:28.127144 | orchestrator | Saturday 05 April 2025 12:25:11 +0000 (0:00:01.185) 0:00:03.976 ******** 2025-04-05 12:27:28.127160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127221 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127268 | orchestrator | 2025-04-05 12:27:28.127284 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-05 12:27:28.127300 | orchestrator | Saturday 05 April 2025 12:25:13 +0000 (0:00:02.265) 0:00:06.241 ******** 2025-04-05 12:27:28.127315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127424 | orchestrator | 2025-04-05 12:27:28.127438 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-05 12:27:28.127452 | orchestrator | Saturday 05 April 2025 12:25:15 +0000 (0:00:02.091) 0:00:08.333 ******** 2025-04-05 12:27:28.127466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127550 | orchestrator | 2025-04-05 12:27:28.127569 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-05 12:27:28.127583 | orchestrator | Saturday 05 April 2025 12:25:18 +0000 (0:00:02.753) 0:00:11.086 ******** 2025-04-05 12:27:28.127602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127677 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.127691 | orchestrator | 2025-04-05 12:27:28.127705 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-05 12:27:28.127719 | 
orchestrator | Saturday 05 April 2025 12:25:20 +0000 (0:00:01.645) 0:00:12.732 ******** 2025-04-05 12:27:28.127733 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.127747 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.127776 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.127792 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:27:28.127806 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:27:28.127819 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:27:28.127833 | orchestrator | 2025-04-05 12:27:28.127847 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-05 12:27:28.127861 | orchestrator | Saturday 05 April 2025 12:25:22 +0000 (0:00:02.912) 0:00:15.644 ******** 2025-04-05 12:27:28.127875 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-05 12:27:28.127890 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-05 12:27:28.127904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-05 12:27:28.127917 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-05 12:27:28.127931 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-05 12:27:28.127945 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-05 12:27:28.127958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.127978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.127998 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.128012 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.128026 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.128040 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-05 12:27:28.128054 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128098 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128112 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-05 12:27:28.128140 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128155 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128183 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128196 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-05 12:27:28.128223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128264 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-05 12:27:28.128305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128333 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128346 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128360 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128374 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-05 12:27:28.128387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-05 12:27:28.128407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-05 12:27:28.128421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-05 12:27:28.128434 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-05 12:27:28.128448 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-05 12:27:28.128462 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-05 12:27:28.128476 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-05 12:27:28.128490 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 
'absent'}) 2025-04-05 12:27:28.128509 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-05 12:27:28.128524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-05 12:27:28.128538 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-05 12:27:28.128552 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-05 12:27:28.128565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-05 12:27:28.128579 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-05 12:27:28.128594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-05 12:27:28.128608 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-05 12:27:28.128622 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-05 12:27:28.128635 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-05 12:27:28.128649 | orchestrator | 2025-04-05 12:27:28.128663 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128685 | orchestrator | Saturday 05 April 2025 12:25:40 +0000 (0:00:17.704) 0:00:33.349 ******** 2025-04-05 12:27:28.128700 | orchestrator | 2025-04-05 12:27:28.128714 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128728 | orchestrator | Saturday 05 April 2025 12:25:40 +0000 (0:00:00.084) 0:00:33.434 ******** 2025-04-05 12:27:28.128741 | orchestrator | 2025-04-05 12:27:28.128755 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128824 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.451) 0:00:33.885 ******** 2025-04-05 12:27:28.128840 | orchestrator | 2025-04-05 12:27:28.128854 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128867 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.082) 0:00:33.968 ******** 2025-04-05 12:27:28.128881 | orchestrator | 2025-04-05 12:27:28.128895 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128909 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.084) 0:00:34.052 ******** 2025-04-05 12:27:28.128923 | orchestrator | 2025-04-05 12:27:28.128949 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-05 12:27:28.128962 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.085) 0:00:34.138 ******** 2025-04-05 12:27:28.128974 | orchestrator | 2025-04-05 12:27:28.128986 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 
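The external_ids written by the "Configure OVN in OVSDB" task above land in the local Open_vSwitch record of each chassis. As a minimal, hedged illustration (not the role's actual implementation, which goes through an Ansible module), the same settings shown for testbed-node-0 could be re-applied or verified with a short Python helper driving ovs-vsctl; compute-only hosts (testbed-node-3 to -5) would drop ovn-bridge-mappings and ovn-cms-options, matching the 'absent' items in the log.

    #!/usr/bin/env python3
    # Sketch only: mirror the chassis settings the task wrote on testbed-node-0.
    # Values are copied from the log above; this is not how the role itself applies them.
    import subprocess

    external_ids = {
        "ovn-encap-ip": "192.168.16.10",
        "ovn-encap-type": "geneve",
        "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": "false",               # False in the log, stored as a string
        "ovn-bridge-mappings": "physnet1:br-ex",  # gateway/network hosts only
        "ovn-cms-options": "enable-chassis-as-gw,availability-zones=nova",
    }

    for key, value in external_ids.items():
        # Equivalent to: ovs-vsctl set Open_vSwitch . external_ids:<key>="<value>"
        # The inner double quotes keep comma-containing values as one string.
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".", f'external_ids:{key}="{value}"'],
            check=True,
        )

    # Print the resulting record so it can be compared with the task output above.
    print(subprocess.run(["ovs-vsctl", "list", "Open_vSwitch", "."],
                         capture_output=True, text=True, check=True).stdout)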
2025-04-05 12:27:28.128998 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.352) 0:00:34.491 ******** 2025-04-05 12:27:28.129011 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.129023 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:27:28.129035 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129048 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129065 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:27:28.129077 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:27:28.129089 | orchestrator | 2025-04-05 12:27:28.129102 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-05 12:27:28.129114 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:02.355) 0:00:36.847 ******** 2025-04-05 12:27:28.129127 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.129139 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.129151 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:27:28.129164 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:27:28.129176 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.129188 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:27:28.129200 | orchestrator | 2025-04-05 12:27:28.129212 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-05 12:27:28.129225 | orchestrator | 2025-04-05 12:27:28.129237 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-05 12:27:28.129250 | orchestrator | Saturday 05 April 2025 12:26:07 +0000 (0:00:23.296) 0:01:00.143 ******** 2025-04-05 12:27:28.129262 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:27:28.129275 | orchestrator | 2025-04-05 12:27:28.129287 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-05 12:27:28.129299 | orchestrator | Saturday 05 April 2025 12:26:08 +0000 (0:00:00.638) 0:01:00.781 ******** 2025-04-05 12:27:28.129312 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:27:28.129324 | orchestrator | 2025-04-05 12:27:28.129336 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-05 12:27:28.129349 | orchestrator | Saturday 05 April 2025 12:26:08 +0000 (0:00:00.623) 0:01:01.404 ******** 2025-04-05 12:27:28.129361 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129373 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129385 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.129397 | orchestrator | 2025-04-05 12:27:28.129409 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-05 12:27:28.129422 | orchestrator | Saturday 05 April 2025 12:26:09 +0000 (0:00:00.690) 0:01:02.095 ******** 2025-04-05 12:27:28.129434 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.129446 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129458 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129476 | orchestrator | 2025-04-05 12:27:28.129489 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-05 12:27:28.129501 | orchestrator | Saturday 05 April 2025 12:26:09 +0000 (0:00:00.349) 0:01:02.445 ******** 2025-04-05 12:27:28.129513 | orchestrator | ok: 
[testbed-node-0] 2025-04-05 12:27:28.129526 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129538 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129550 | orchestrator | 2025-04-05 12:27:28.129562 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-05 12:27:28.129574 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.320) 0:01:02.765 ******** 2025-04-05 12:27:28.129587 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.129599 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129611 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129629 | orchestrator | 2025-04-05 12:27:28.129641 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-05 12:27:28.129654 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.319) 0:01:03.084 ******** 2025-04-05 12:27:28.129672 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.129685 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.129697 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.129710 | orchestrator | 2025-04-05 12:27:28.129722 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-05 12:27:28.129734 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.256) 0:01:03.341 ******** 2025-04-05 12:27:28.129747 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.129759 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.129812 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.129825 | orchestrator | 2025-04-05 12:27:28.129838 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-05 12:27:28.129850 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.311) 0:01:03.653 ******** 2025-04-05 12:27:28.129862 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.129875 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.129887 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.129900 | orchestrator | 2025-04-05 12:27:28.129912 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-05 12:27:28.129925 | orchestrator | Saturday 05 April 2025 12:26:11 +0000 (0:00:00.302) 0:01:03.955 ******** 2025-04-05 12:27:28.129937 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.129950 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.129962 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.129974 | orchestrator | 2025-04-05 12:27:28.129986 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-05 12:27:28.129996 | orchestrator | Saturday 05 April 2025 12:26:11 +0000 (0:00:00.235) 0:01:04.191 ******** 2025-04-05 12:27:28.130006 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130043 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130056 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130066 | orchestrator | 2025-04-05 12:27:28.130076 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-05 12:27:28.130087 | orchestrator | Saturday 05 April 2025 12:26:11 +0000 (0:00:00.302) 0:01:04.494 ******** 2025-04-05 12:27:28.130097 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130107 | orchestrator | skipping: [testbed-node-1] 2025-04-05 
12:27:28.130117 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130127 | orchestrator | 2025-04-05 12:27:28.130137 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-05 12:27:28.130148 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.309) 0:01:04.804 ******** 2025-04-05 12:27:28.130158 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130168 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130178 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130188 | orchestrator | 2025-04-05 12:27:28.130199 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-05 12:27:28.130209 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.292) 0:01:05.097 ******** 2025-04-05 12:27:28.130219 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130229 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130239 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130249 | orchestrator | 2025-04-05 12:27:28.130260 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-05 12:27:28.130270 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.242) 0:01:05.339 ******** 2025-04-05 12:27:28.130280 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130290 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130300 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130316 | orchestrator | 2025-04-05 12:27:28.130326 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-05 12:27:28.130336 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.274) 0:01:05.614 ******** 2025-04-05 12:27:28.130346 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130356 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130366 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130376 | orchestrator | 2025-04-05 12:27:28.130386 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-05 12:27:28.130396 | orchestrator | Saturday 05 April 2025 12:26:13 +0000 (0:00:00.265) 0:01:05.879 ******** 2025-04-05 12:27:28.130406 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130417 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130432 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130443 | orchestrator | 2025-04-05 12:27:28.130453 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-05 12:27:28.130463 | orchestrator | Saturday 05 April 2025 12:26:13 +0000 (0:00:00.245) 0:01:06.125 ******** 2025-04-05 12:27:28.130473 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130484 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130494 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130504 | orchestrator | 2025-04-05 12:27:28.130514 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-05 12:27:28.130524 | orchestrator | Saturday 05 April 2025 12:26:13 +0000 (0:00:00.313) 0:01:06.438 ******** 2025-04-05 12:27:28.130534 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130544 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130559 | orchestrator | skipping: [testbed-node-2] 
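The lookup_cluster checks above are all skipped because this is a fresh deployment and no OVN database containers exist yet. Once the ovn_nb_db and ovn_sb_db containers are running, the kind of state they probe can be inspected by hand; the following Python sketch is only an approximation (the container names come from this log, while the ctl socket paths inside the kolla images are an assumption and may differ):

    #!/usr/bin/env python3
    # Rough sketch of the check the skipped lookup_cluster tasks perform on an
    # already-bootstrapped cluster: ask each OVN database server for its raft status.
    import subprocess

    CHECKS = {
        "ovn_nb_db": ("/run/ovn/ovnnb_db.ctl", "OVN_Northbound"),  # socket path assumed
        "ovn_sb_db": ("/run/ovn/ovnsb_db.ctl", "OVN_Southbound"),  # socket path assumed
    }

    for container, (ctl_socket, schema) in CHECKS.items():
        result = subprocess.run(
            ["docker", "exec", container,
             "ovs-appctl", "-t", ctl_socket, "cluster/status", schema],
            capture_output=True, text=True,
        )
        status = result.stdout if result.returncode == 0 else result.stderr
        # A healthy cluster reports a leader, its term, and the member servers;
        # on this first run the containers are not created yet, hence the skips.
        print(f"--- {container} ---\n{status}")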
2025-04-05 12:27:28.130569 | orchestrator | 2025-04-05 12:27:28.130580 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-05 12:27:28.130589 | orchestrator | Saturday 05 April 2025 12:26:14 +0000 (0:00:00.369) 0:01:06.808 ******** 2025-04-05 12:27:28.130603 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:27:28.130613 | orchestrator | 2025-04-05 12:27:28.130623 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-05 12:27:28.130633 | orchestrator | Saturday 05 April 2025 12:26:14 +0000 (0:00:00.675) 0:01:07.483 ******** 2025-04-05 12:27:28.130643 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.130653 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.130663 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.130673 | orchestrator | 2025-04-05 12:27:28.130684 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-05 12:27:28.130694 | orchestrator | Saturday 05 April 2025 12:26:15 +0000 (0:00:00.693) 0:01:08.176 ******** 2025-04-05 12:27:28.130704 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.130714 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.130724 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.130734 | orchestrator | 2025-04-05 12:27:28.130744 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-05 12:27:28.130754 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.867) 0:01:09.044 ******** 2025-04-05 12:27:28.130779 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130790 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130800 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130810 | orchestrator | 2025-04-05 12:27:28.130820 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-05 12:27:28.130834 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.273) 0:01:09.317 ******** 2025-04-05 12:27:28.130844 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130854 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130864 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130874 | orchestrator | 2025-04-05 12:27:28.130884 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-05 12:27:28.130899 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.309) 0:01:09.627 ******** 2025-04-05 12:27:28.130909 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130919 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130929 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130939 | orchestrator | 2025-04-05 12:27:28.130949 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-05 12:27:28.130959 | orchestrator | Saturday 05 April 2025 12:26:17 +0000 (0:00:00.334) 0:01:09.961 ******** 2025-04-05 12:27:28.130969 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.130979 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.130989 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.130999 | orchestrator | 2025-04-05 12:27:28.131010 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new 
member)] ******************** 2025-04-05 12:27:28.131020 | orchestrator | Saturday 05 April 2025 12:26:17 +0000 (0:00:00.327) 0:01:10.289 ******** 2025-04-05 12:27:28.131030 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.131040 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.131050 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.131060 | orchestrator | 2025-04-05 12:27:28.131070 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-05 12:27:28.131080 | orchestrator | Saturday 05 April 2025 12:26:17 +0000 (0:00:00.243) 0:01:10.532 ******** 2025-04-05 12:27:28.131091 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.131101 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.131111 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.131121 | orchestrator | 2025-04-05 12:27:28.131131 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-05 12:27:28.131141 | orchestrator | Saturday 05 April 2025 12:26:18 +0000 (0:00:00.333) 0:01:10.866 ******** 2025-04-05 12:27:28.131152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131262 | orchestrator | 2025-04-05 12:27:28.131272 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-05 12:27:28.131282 | orchestrator | Saturday 05 April 2025 12:26:20 +0000 (0:00:01.830) 0:01:12.697 ******** 2025-04-05 12:27:28.131292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131397 | orchestrator | 2025-04-05 12:27:28.131407 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-05 12:27:28.131417 | orchestrator | Saturday 05 April 2025 12:26:23 +0000 (0:00:03.505) 0:01:16.202 ******** 2025-04-05 12:27:28.131427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131458 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.131530 | orchestrator | 2025-04-05 12:27:28.131540 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-05 12:27:28.131550 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:02.635) 0:01:18.838 ******** 2025-04-05 12:27:28.131560 | orchestrator | 2025-04-05 12:27:28.131570 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-05 12:27:28.131580 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:00.063) 0:01:18.901 ******** 2025-04-05 12:27:28.131590 | orchestrator | 2025-04-05 12:27:28.131601 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-05 12:27:28.131611 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:00.192) 0:01:19.093 ******** 2025-04-05 12:27:28.131621 | orchestrator | 2025-04-05 12:27:28.131631 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-nb-db container] ************************* 2025-04-05 12:27:28.131641 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:00.080) 0:01:19.174 ******** 2025-04-05 12:27:28.131651 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.131661 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.131671 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.131681 | orchestrator | 2025-04-05 12:27:28.131691 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-05 12:27:28.131701 | orchestrator | Saturday 05 April 2025 12:26:33 +0000 (0:00:06.878) 0:01:26.053 ******** 2025-04-05 12:27:28.131711 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.131721 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.131731 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.131741 | orchestrator | 2025-04-05 12:27:28.131751 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-05 12:27:28.131775 | orchestrator | Saturday 05 April 2025 12:26:41 +0000 (0:00:07.688) 0:01:33.741 ******** 2025-04-05 12:27:28.131786 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.131796 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.131806 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.131816 | orchestrator | 2025-04-05 12:27:28.131826 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-05 12:27:28.131840 | orchestrator | Saturday 05 April 2025 12:26:48 +0000 (0:00:07.383) 0:01:41.124 ******** 2025-04-05 12:27:28.131850 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.131860 | orchestrator | 2025-04-05 12:27:28.131870 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-05 12:27:28.131880 | orchestrator | Saturday 05 April 2025 12:26:48 +0000 (0:00:00.110) 0:01:41.234 ******** 2025-04-05 12:27:28.131898 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.131908 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.131918 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.131928 | orchestrator | 2025-04-05 12:27:28.131938 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-05 12:27:28.131951 | orchestrator | Saturday 05 April 2025 12:26:49 +0000 (0:00:00.955) 0:01:42.190 ******** 2025-04-05 12:27:28.131962 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.131972 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.131982 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.131992 | orchestrator | 2025-04-05 12:27:28.132002 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-05 12:27:28.132012 | orchestrator | Saturday 05 April 2025 12:26:50 +0000 (0:00:00.758) 0:01:42.949 ******** 2025-04-05 12:27:28.132022 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.132032 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.132042 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.132056 | orchestrator | 2025-04-05 12:27:28.132066 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-05 12:27:28.132076 | orchestrator | Saturday 05 April 2025 12:26:50 +0000 (0:00:00.680) 0:01:43.629 ******** 2025-04-05 12:27:28.132086 | orchestrator | skipping: [testbed-node-1] 2025-04-05 
12:27:28.132096 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.132106 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.132116 | orchestrator | 2025-04-05 12:27:28.132126 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-05 12:27:28.132136 | orchestrator | Saturday 05 April 2025 12:26:51 +0000 (0:00:00.508) 0:01:44.137 ******** 2025-04-05 12:27:28.132146 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.132156 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.132170 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.132181 | orchestrator | 2025-04-05 12:27:28.132191 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-05 12:27:28.132201 | orchestrator | Saturday 05 April 2025 12:26:52 +0000 (0:00:00.801) 0:01:44.939 ******** 2025-04-05 12:27:28.132211 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.132221 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.132230 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.132240 | orchestrator | 2025-04-05 12:27:28.132250 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-05 12:27:28.132260 | orchestrator | Saturday 05 April 2025 12:26:53 +0000 (0:00:00.938) 0:01:45.877 ******** 2025-04-05 12:27:28.132270 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.132280 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.132290 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.132300 | orchestrator | 2025-04-05 12:27:28.132310 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-05 12:27:28.132320 | orchestrator | Saturday 05 April 2025 12:26:53 +0000 (0:00:00.283) 0:01:46.161 ******** 2025-04-05 12:27:28.132330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132344 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132383 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132394 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132429 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132439 | orchestrator | 2025-04-05 12:27:28.132450 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-05 12:27:28.132460 | orchestrator | Saturday 05 April 2025 12:26:54 +0000 (0:00:01.442) 0:01:47.603 ******** 2025-04-05 12:27:28.132470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132480 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132490 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
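[editor's sketch] The (item={'key': ..., 'value': ...}) entries echoed above and below all expand from one OVN service map. A minimal Python sketch of that map and of the key/value expansion the loop output reflects — the field values are copied from the log items themselves, and the expansion mirrors what Ansible's dict2items filter produces; this is an illustration only, not the role's actual code:

    # Sketch only: the OVN service map as it appears in the loop items above,
    # and the key/value expansion that yields entries like
    # {'key': 'ovn-northd', 'value': {...}} for each node.
    ovn_db_services = {
        "ovn-northd": {
            "container_name": "ovn_northd",
            "group": "ovn-northd",
            "enabled": True,
            "image": "registry.osism.tech/kolla/ovn-northd:2024.1",
            "volumes": [
                "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
        },
        "ovn-nb-db": {
            "container_name": "ovn_nb_db",
            "group": "ovn-nb-db",
            "enabled": True,
            "image": "registry.osism.tech/kolla/ovn-nb-db-server:2024.1",
            "volumes": [
                "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
        },
        "ovn-sb-db": {
            "container_name": "ovn_sb_db",
            "group": "ovn-sb-db",
            "enabled": True,
            "image": "registry.osism.tech/kolla/ovn-sb-db-server:2024.1",
            "volumes": [
                "/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "ovn_sb_db:/var/lib/openvswitch/ovn-sb/",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
        },
    }

    # Equivalent of Ansible's dict2items: one {'key': ..., 'value': ...} item
    # per enabled service, which is the shape echoed for every changed/ok result.
    items = [{"key": k, "value": v}
             for k, v in ovn_db_services.items() if v["enabled"]]

The remaining loop results for this task continue in the log below.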
2025-04-05 12:27:28.132505 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132535 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132569 | orchestrator | 2025-04-05 12:27:28.132579 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-05 12:27:28.132589 | orchestrator | Saturday 05 April 2025 12:26:59 +0000 (0:00:04.833) 0:01:52.437 ******** 2025-04-05 12:27:28.132603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132614 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132624 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132648 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:27:28.132698 | orchestrator | 2025-04-05 12:27:28.132709 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-05 12:27:28.132719 | orchestrator | Saturday 05 April 2025 12:27:02 +0000 (0:00:02.980) 0:01:55.417 ******** 2025-04-05 12:27:28.132729 | orchestrator | 2025-04-05 12:27:28.132739 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-04-05 12:27:28.132749 | orchestrator | Saturday 05 April 2025 12:27:02 +0000 (0:00:00.164) 0:01:55.582 ******** 2025-04-05 12:27:28.132759 | orchestrator | 2025-04-05 12:27:28.132800 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-05 12:27:28.132811 | orchestrator | Saturday 05 April 2025 12:27:02 +0000 (0:00:00.059) 0:01:55.641 ******** 2025-04-05 12:27:28.132821 | orchestrator | 2025-04-05 12:27:28.132831 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-05 12:27:28.132841 | orchestrator | Saturday 05 April 2025 12:27:03 +0000 (0:00:00.052) 0:01:55.694 ******** 2025-04-05 12:27:28.132851 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.132861 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.132871 | orchestrator | 2025-04-05 12:27:28.132885 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-05 12:27:28.132896 | orchestrator | Saturday 05 April 2025 12:27:09 +0000 (0:00:06.575) 0:02:02.269 ******** 2025-04-05 12:27:28.132906 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.132916 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.132931 | orchestrator | 2025-04-05 12:27:28.132942 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-05 12:27:28.132952 | orchestrator | Saturday 05 April 2025 12:27:16 +0000 (0:00:06.521) 0:02:08.790 ******** 2025-04-05 12:27:28.132962 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:27:28.132971 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:27:28.132982 | orchestrator | 2025-04-05 12:27:28.132992 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-05 12:27:28.133002 | orchestrator | Saturday 05 April 2025 12:27:22 +0000 (0:00:06.609) 0:02:15.399 ******** 2025-04-05 12:27:28.133012 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:27:28.133022 | orchestrator | 2025-04-05 12:27:28.133032 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-05 12:27:28.133046 | orchestrator | Saturday 05 April 2025 12:27:22 +0000 (0:00:00.113) 0:02:15.513 ******** 2025-04-05 12:27:28.133056 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.133066 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.133076 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.133086 | orchestrator | 2025-04-05 12:27:28.133096 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-05 12:27:28.133106 | orchestrator | Saturday 05 April 2025 12:27:23 +0000 (0:00:00.887) 0:02:16.400 ******** 2025-04-05 12:27:28.133116 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.133126 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.133136 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.133146 | orchestrator | 2025-04-05 12:27:28.133156 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-05 12:27:28.133166 | orchestrator | Saturday 05 April 2025 12:27:24 +0000 (0:00:00.672) 0:02:17.073 ******** 2025-04-05 12:27:28.133176 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.133186 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.133195 | orchestrator | ok: [testbed-node-2] 
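[editor's sketch] The two "Get OVN_Northbound/Southbound cluster leader" tasks above query the raft role of the clustered OVSDB servers, and the "Configure OVN NB/SB connection settings" steps that follow run changed on testbed-node-0 only while skipping the other two nodes — presumably so the connection options are set exactly once. A minimal sketch of such a leader check from a node shell, using the ovn_nb_db / ovn_sb_db container names from the log; the control-socket paths and the use of the Docker CLI are assumptions, not taken from this job:

    # Sketch only: query the raft role of the local OVN NB/SB database servers.
    # Container names come from the log; the *.ctl paths are assumed defaults.
    import subprocess

    CHECKS = {
        "ovn_nb_db": ("/run/ovn/ovnnb_db.ctl", "OVN_Northbound"),
        "ovn_sb_db": ("/run/ovn/ovnsb_db.ctl", "OVN_Southbound"),
    }

    def is_leader(container: str) -> bool:
        ctl, schema = CHECKS[container]
        out = subprocess.run(
            ["docker", "exec", container,
             "ovs-appctl", "-t", ctl, "cluster/status", schema],
            capture_output=True, text=True, check=True,
        ).stdout
        # cluster/status reports e.g. "Role: leader" or "Role: follower".
        return any(line.strip() == "Role: leader" for line in out.splitlines())

    if __name__ == "__main__":
        for name in CHECKS:
            print(name, "leader" if is_leader(name) else "follower/candidate")

The playbook output continues below with the second round of connection-settings and wait tasks.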
2025-04-05 12:27:28.133205 | orchestrator | 2025-04-05 12:27:28.133215 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-05 12:27:28.133225 | orchestrator | Saturday 05 April 2025 12:27:25 +0000 (0:00:00.877) 0:02:17.950 ******** 2025-04-05 12:27:28.133235 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:27:28.133245 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:27:28.133255 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:27:28.133265 | orchestrator | 2025-04-05 12:27:28.133275 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-05 12:27:28.133285 | orchestrator | Saturday 05 April 2025 12:27:26 +0000 (0:00:00.737) 0:02:18.688 ******** 2025-04-05 12:27:28.133295 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.133305 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.133315 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.133323 | orchestrator | 2025-04-05 12:27:28.133332 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-05 12:27:28.133340 | orchestrator | Saturday 05 April 2025 12:27:26 +0000 (0:00:00.717) 0:02:19.406 ******** 2025-04-05 12:27:28.133348 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:27:28.133357 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:27:28.133365 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:27:28.133374 | orchestrator | 2025-04-05 12:27:28.133382 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:27:28.133391 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-05 12:27:28.133400 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-05 12:27:28.133408 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-05 12:27:28.133422 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:27:28.133430 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:27:28.133439 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:27:28.133448 | orchestrator | 2025-04-05 12:27:28.133456 | orchestrator | 2025-04-05 12:27:28.133465 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:27:28.133473 | orchestrator | Saturday 05 April 2025 12:27:27 +0000 (0:00:00.998) 0:02:20.404 ******** 2025-04-05 12:27:28.133482 | orchestrator | =============================================================================== 2025-04-05 12:27:28.133490 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.30s 2025-04-05 12:27:28.133499 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.70s 2025-04-05 12:27:28.133507 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.21s 2025-04-05 12:27:28.133515 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.99s 2025-04-05 12:27:28.133524 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.45s 2025-04-05 12:27:28.133532 | orchestrator | ovn-db 
: Copying over config.json files for services -------------------- 4.83s 2025-04-05 12:27:28.133541 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.51s 2025-04-05 12:27:28.133552 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s 2025-04-05 12:27:31.183919 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.91s 2025-04-05 12:27:31.184032 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.75s 2025-04-05 12:27:31.184048 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.64s 2025-04-05 12:27:31.184062 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.36s 2025-04-05 12:27:31.184076 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.27s 2025-04-05 12:27:31.184090 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.09s 2025-04-05 12:27:31.184104 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.83s 2025-04-05 12:27:31.184117 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.65s 2025-04-05 12:27:31.184131 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.44s 2025-04-05 12:27:31.184162 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2025-04-05 12:27:31.184176 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 1.14s 2025-04-05 12:27:31.184190 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.00s 2025-04-05 12:27:31.184203 | orchestrator | 2025-04-05 12:27:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:31.184236 | orchestrator | 2025-04-05 12:27:31 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:34.220604 | orchestrator | 2025-04-05 12:27:31 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:34.220723 | orchestrator | 2025-04-05 12:27:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:34.220758 | orchestrator | 2025-04-05 12:27:34 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:34.223071 | orchestrator | 2025-04-05 12:27:34 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:37.269645 | orchestrator | 2025-04-05 12:27:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:37.269805 | orchestrator | 2025-04-05 12:27:37 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:37.272003 | orchestrator | 2025-04-05 12:27:37 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:37.272043 | orchestrator | 2025-04-05 12:27:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:40.317081 | orchestrator | 2025-04-05 12:27:40 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:40.317410 | orchestrator | 2025-04-05 12:27:40 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:40.317454 | orchestrator | 2025-04-05 12:27:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:43.356124 | orchestrator | 2025-04-05 12:27:43 | INFO  | Task 
d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:43.356512 | orchestrator | 2025-04-05 12:27:43 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:46.413605 | orchestrator | 2025-04-05 12:27:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:46.413736 | orchestrator | 2025-04-05 12:27:46 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:46.415298 | orchestrator | 2025-04-05 12:27:46 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:46.415453 | orchestrator | 2025-04-05 12:27:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:49.449117 | orchestrator | 2025-04-05 12:27:49 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:52.501852 | orchestrator | 2025-04-05 12:27:49 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:52.501970 | orchestrator | 2025-04-05 12:27:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:52.502005 | orchestrator | 2025-04-05 12:27:52 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:52.505150 | orchestrator | 2025-04-05 12:27:52 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:55.562806 | orchestrator | 2025-04-05 12:27:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:55.562941 | orchestrator | 2025-04-05 12:27:55 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:55.563241 | orchestrator | 2025-04-05 12:27:55 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:27:55.563673 | orchestrator | 2025-04-05 12:27:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:27:58.626728 | orchestrator | 2025-04-05 12:27:58 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:27:58.628335 | orchestrator | 2025-04-05 12:27:58 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:01.680356 | orchestrator | 2025-04-05 12:27:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:01.680492 | orchestrator | 2025-04-05 12:28:01 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:01.683880 | orchestrator | 2025-04-05 12:28:01 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:04.736958 | orchestrator | 2025-04-05 12:28:01 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:04.737092 | orchestrator | 2025-04-05 12:28:04 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:04.737903 | orchestrator | 2025-04-05 12:28:04 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:07.788809 | orchestrator | 2025-04-05 12:28:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:07.788958 | orchestrator | 2025-04-05 12:28:07 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:07.791016 | orchestrator | 2025-04-05 12:28:07 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:07.791120 | orchestrator | 2025-04-05 12:28:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:10.837050 | orchestrator | 2025-04-05 12:28:10 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:10.837625 | orchestrator 
| 2025-04-05 12:28:10 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:13.875205 | orchestrator | 2025-04-05 12:28:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:13.875331 | orchestrator | 2025-04-05 12:28:13 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:13.877016 | orchestrator | 2025-04-05 12:28:13 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:13.877288 | orchestrator | 2025-04-05 12:28:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:16.926238 | orchestrator | 2025-04-05 12:28:16 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:16.927721 | orchestrator | 2025-04-05 12:28:16 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:19.973388 | orchestrator | 2025-04-05 12:28:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:19.973510 | orchestrator | 2025-04-05 12:28:19 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:23.009894 | orchestrator | 2025-04-05 12:28:19 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:23.010000 | orchestrator | 2025-04-05 12:28:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:23.010151 | orchestrator | 2025-04-05 12:28:23 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:26.061734 | orchestrator | 2025-04-05 12:28:23 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:26.061929 | orchestrator | 2025-04-05 12:28:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:26.061982 | orchestrator | 2025-04-05 12:28:26 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:26.064877 | orchestrator | 2025-04-05 12:28:26 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:29.099433 | orchestrator | 2025-04-05 12:28:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:29.099582 | orchestrator | 2025-04-05 12:28:29 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:29.100981 | orchestrator | 2025-04-05 12:28:29 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:29.101304 | orchestrator | 2025-04-05 12:28:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:32.147261 | orchestrator | 2025-04-05 12:28:32 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:35.185247 | orchestrator | 2025-04-05 12:28:32 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:35.185381 | orchestrator | 2025-04-05 12:28:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:35.185420 | orchestrator | 2025-04-05 12:28:35 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:38.222818 | orchestrator | 2025-04-05 12:28:35 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:38.222948 | orchestrator | 2025-04-05 12:28:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:38.222981 | orchestrator | 2025-04-05 12:28:38 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:38.224623 | orchestrator | 2025-04-05 12:28:38 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 
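[editor's sketch] The alternating "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines here and below come from the deploy wrapper polling its queued tasks until each reaches a terminal state (SUCCESS appears further down for 2982897e-... and d997d420-...). A minimal sketch of that wait-until-done loop, with a hypothetical get_task_state() lookup standing in for whatever status call the real client makes:

    # Sketch only: poll a set of task IDs until each reports a terminal state.
    # get_task_state() is a hypothetical stand-in for the real status lookup;
    # the commented IDs below are the ones visible in this log.
    import time

    TERMINAL = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    # wait_for_tasks(["d997d420-a62b-4ee3-b509-bb456d5f75d6",
    #                 "8262bc73-7a82-47e7-b59f-ada502806635"], get_task_state)

The polling output continues below until both tasks leave the STARTED state.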
2025-04-05 12:28:41.259527 | orchestrator | 2025-04-05 12:28:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:41.259656 | orchestrator | 2025-04-05 12:28:41 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:41.259855 | orchestrator | 2025-04-05 12:28:41 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:44.302415 | orchestrator | 2025-04-05 12:28:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:44.302538 | orchestrator | 2025-04-05 12:28:44 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:44.302973 | orchestrator | 2025-04-05 12:28:44 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:47.368348 | orchestrator | 2025-04-05 12:28:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:47.368674 | orchestrator | 2025-04-05 12:28:47 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:50.402667 | orchestrator | 2025-04-05 12:28:47 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:50.402799 | orchestrator | 2025-04-05 12:28:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:50.402834 | orchestrator | 2025-04-05 12:28:50 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:50.404109 | orchestrator | 2025-04-05 12:28:50 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:53.440368 | orchestrator | 2025-04-05 12:28:50 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:53.440525 | orchestrator | 2025-04-05 12:28:53 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:53.442179 | orchestrator | 2025-04-05 12:28:53 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:56.480327 | orchestrator | 2025-04-05 12:28:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:56.480453 | orchestrator | 2025-04-05 12:28:56 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:56.481168 | orchestrator | 2025-04-05 12:28:56 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:28:59.527160 | orchestrator | 2025-04-05 12:28:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:28:59.527296 | orchestrator | 2025-04-05 12:28:59 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:28:59.529963 | orchestrator | 2025-04-05 12:28:59 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:02.563760 | orchestrator | 2025-04-05 12:28:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:02.563941 | orchestrator | 2025-04-05 12:29:02 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:02.564376 | orchestrator | 2025-04-05 12:29:02 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:05.611070 | orchestrator | 2025-04-05 12:29:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:05.611714 | orchestrator | 2025-04-05 12:29:05 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:08.644283 | orchestrator | 2025-04-05 12:29:05 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:08.644394 | orchestrator | 2025-04-05 12:29:05 | INFO  | Wait 1 second(s) until 
the next check 2025-04-05 12:29:08.644429 | orchestrator | 2025-04-05 12:29:08 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:08.648044 | orchestrator | 2025-04-05 12:29:08 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:11.701125 | orchestrator | 2025-04-05 12:29:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:11.701261 | orchestrator | 2025-04-05 12:29:11 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:14.759160 | orchestrator | 2025-04-05 12:29:11 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:14.759276 | orchestrator | 2025-04-05 12:29:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:14.759310 | orchestrator | 2025-04-05 12:29:14 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:14.760180 | orchestrator | 2025-04-05 12:29:14 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:14.760288 | orchestrator | 2025-04-05 12:29:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:17.820280 | orchestrator | 2025-04-05 12:29:17 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:17.822225 | orchestrator | 2025-04-05 12:29:17 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:20.862969 | orchestrator | 2025-04-05 12:29:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:20.863088 | orchestrator | 2025-04-05 12:29:20 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:20.865199 | orchestrator | 2025-04-05 12:29:20 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:23.913310 | orchestrator | 2025-04-05 12:29:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:23.913446 | orchestrator | 2025-04-05 12:29:23 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:23.914652 | orchestrator | 2025-04-05 12:29:23 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:26.954908 | orchestrator | 2025-04-05 12:29:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:26.955053 | orchestrator | 2025-04-05 12:29:26 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:26.955400 | orchestrator | 2025-04-05 12:29:26 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:26.959077 | orchestrator | 2025-04-05 12:29:26 | INFO  | Task 2982897e-5afc-4a75-a6b8-547bb2bea7d9 is in state STARTED 2025-04-05 12:29:29.996865 | orchestrator | 2025-04-05 12:29:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:29.997104 | orchestrator | 2025-04-05 12:29:29 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:29.997727 | orchestrator | 2025-04-05 12:29:29 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:29.997790 | orchestrator | 2025-04-05 12:29:29 | INFO  | Task 2982897e-5afc-4a75-a6b8-547bb2bea7d9 is in state STARTED 2025-04-05 12:29:33.045912 | orchestrator | 2025-04-05 12:29:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:33.046086 | orchestrator | 2025-04-05 12:29:33 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:33.047805 | orchestrator | 2025-04-05 
12:29:33 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:33.049385 | orchestrator | 2025-04-05 12:29:33 | INFO  | Task 2982897e-5afc-4a75-a6b8-547bb2bea7d9 is in state STARTED 2025-04-05 12:29:33.049850 | orchestrator | 2025-04-05 12:29:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:36.080818 | orchestrator | 2025-04-05 12:29:36 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:36.087018 | orchestrator | 2025-04-05 12:29:36 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:36.089810 | orchestrator | 2025-04-05 12:29:36 | INFO  | Task 2982897e-5afc-4a75-a6b8-547bb2bea7d9 is in state SUCCESS 2025-04-05 12:29:36.090623 | orchestrator | 2025-04-05 12:29:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:39.115486 | orchestrator | 2025-04-05 12:29:39 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:39.116173 | orchestrator | 2025-04-05 12:29:39 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:42.157224 | orchestrator | 2025-04-05 12:29:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:42.157357 | orchestrator | 2025-04-05 12:29:42 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:45.205353 | orchestrator | 2025-04-05 12:29:42 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:45.205464 | orchestrator | 2025-04-05 12:29:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:45.205502 | orchestrator | 2025-04-05 12:29:45 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:45.206853 | orchestrator | 2025-04-05 12:29:45 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:48.252964 | orchestrator | 2025-04-05 12:29:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:48.253100 | orchestrator | 2025-04-05 12:29:48 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:48.253877 | orchestrator | 2025-04-05 12:29:48 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:51.293558 | orchestrator | 2025-04-05 12:29:48 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:51.293683 | orchestrator | 2025-04-05 12:29:51 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:51.294728 | orchestrator | 2025-04-05 12:29:51 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:51.294896 | orchestrator | 2025-04-05 12:29:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:54.334851 | orchestrator | 2025-04-05 12:29:54 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:54.335724 | orchestrator | 2025-04-05 12:29:54 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:29:57.378844 | orchestrator | 2025-04-05 12:29:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:29:57.378972 | orchestrator | 2025-04-05 12:29:57 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:29:57.380626 | orchestrator | 2025-04-05 12:29:57 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:00.423409 | orchestrator | 2025-04-05 12:29:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 
12:30:00.423562 | orchestrator | 2025-04-05 12:30:00 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:00.423840 | orchestrator | 2025-04-05 12:30:00 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:03.466106 | orchestrator | 2025-04-05 12:30:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:03.466245 | orchestrator | 2025-04-05 12:30:03 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:03.468539 | orchestrator | 2025-04-05 12:30:03 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:06.512005 | orchestrator | 2025-04-05 12:30:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:06.512139 | orchestrator | 2025-04-05 12:30:06 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:06.512490 | orchestrator | 2025-04-05 12:30:06 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:09.548150 | orchestrator | 2025-04-05 12:30:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:09.548275 | orchestrator | 2025-04-05 12:30:09 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:09.549056 | orchestrator | 2025-04-05 12:30:09 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:09.549423 | orchestrator | 2025-04-05 12:30:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:12.602974 | orchestrator | 2025-04-05 12:30:12 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:12.603507 | orchestrator | 2025-04-05 12:30:12 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:12.603862 | orchestrator | 2025-04-05 12:30:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:15.651087 | orchestrator | 2025-04-05 12:30:15 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:15.652998 | orchestrator | 2025-04-05 12:30:15 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:18.701535 | orchestrator | 2025-04-05 12:30:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:18.701666 | orchestrator | 2025-04-05 12:30:18 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state STARTED 2025-04-05 12:30:18.702138 | orchestrator | 2025-04-05 12:30:18 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:21.752279 | orchestrator | 2025-04-05 12:30:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:21.752416 | orchestrator | 2025-04-05 12:30:21 | INFO  | Task d997d420-a62b-4ee3-b509-bb456d5f75d6 is in state SUCCESS 2025-04-05 12:30:21.754442 | orchestrator | 2025-04-05 12:30:21.754618 | orchestrator | None 2025-04-05 12:30:21.754641 | orchestrator | 2025-04-05 12:30:21.754657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:30:21.754750 | orchestrator | 2025-04-05 12:30:21.754813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:30:21.754828 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.488) 0:00:00.488 ******** 2025-04-05 12:30:21.754843 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.754858 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.754873 | orchestrator | ok: 
[testbed-node-2] 2025-04-05 12:30:21.754887 | orchestrator | 2025-04-05 12:30:21.755221 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:30:21.755241 | orchestrator | Saturday 05 April 2025 12:24:00 +0000 (0:00:00.347) 0:00:00.835 ******** 2025-04-05 12:30:21.755257 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-04-05 12:30:21.755271 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-04-05 12:30:21.755308 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-04-05 12:30:21.755401 | orchestrator | 2025-04-05 12:30:21.755418 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-04-05 12:30:21.755432 | orchestrator | 2025-04-05 12:30:21.755446 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-05 12:30:21.755460 | orchestrator | Saturday 05 April 2025 12:24:01 +0000 (0:00:00.677) 0:00:01.513 ******** 2025-04-05 12:30:21.755474 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.755489 | orchestrator | 2025-04-05 12:30:21.755503 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-04-05 12:30:21.755517 | orchestrator | Saturday 05 April 2025 12:24:02 +0000 (0:00:00.542) 0:00:02.055 ******** 2025-04-05 12:30:21.755531 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.755545 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.755559 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.755573 | orchestrator | 2025-04-05 12:30:21.755586 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-05 12:30:21.755600 | orchestrator | Saturday 05 April 2025 12:24:03 +0000 (0:00:01.079) 0:00:03.135 ******** 2025-04-05 12:30:21.755614 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.755627 | orchestrator | 2025-04-05 12:30:21.756025 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-04-05 12:30:21.756044 | orchestrator | Saturday 05 April 2025 12:24:03 +0000 (0:00:00.658) 0:00:03.793 ******** 2025-04-05 12:30:21.756058 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.756073 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.756087 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.756102 | orchestrator | 2025-04-05 12:30:21.756116 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-04-05 12:30:21.756130 | orchestrator | Saturday 05 April 2025 12:24:05 +0000 (0:00:01.408) 0:00:05.201 ******** 2025-04-05 12:30:21.756145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756160 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756204 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-05 12:30:21.756260 | orchestrator | ok: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-05 12:30:21.756277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756291 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-05 12:30:21.756305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-05 12:30:21.756319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-05 12:30:21.756333 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-05 12:30:21.756347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-05 12:30:21.756361 | orchestrator | 2025-04-05 12:30:21.756375 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-05 12:30:21.756388 | orchestrator | Saturday 05 April 2025 12:24:08 +0000 (0:00:03.236) 0:00:08.438 ******** 2025-04-05 12:30:21.756402 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-05 12:30:21.756417 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-05 12:30:21.756444 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-05 12:30:21.756458 | orchestrator | 2025-04-05 12:30:21.756472 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-05 12:30:21.756486 | orchestrator | Saturday 05 April 2025 12:24:09 +0000 (0:00:00.735) 0:00:09.173 ******** 2025-04-05 12:30:21.756952 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-05 12:30:21.756975 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-05 12:30:21.756990 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-05 12:30:21.757004 | orchestrator | 2025-04-05 12:30:21.757052 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-05 12:30:21.757069 | orchestrator | Saturday 05 April 2025 12:24:10 +0000 (0:00:01.573) 0:00:10.747 ******** 2025-04-05 12:30:21.757083 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-04-05 12:30:21.757098 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.757145 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-04-05 12:30:21.757162 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.757176 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-05 12:30:21.757190 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.757204 | orchestrator | 2025-04-05 12:30:21.757218 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-04-05 12:30:21.757232 | orchestrator | Saturday 05 April 2025 12:24:11 +0000 (0:00:00.687) 0:00:11.435 ******** 2025-04-05 12:30:21.757248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.757386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.757402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.757417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.757879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.757910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.757940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.757958 | orchestrator | 2025-04-05 12:30:21.757976 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-05 12:30:21.757991 | orchestrator | Saturday 05 April 2025 12:24:13 +0000 (0:00:02.218) 0:00:13.653 ******** 2025-04-05 12:30:21.758007 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.758054 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.758082 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.758695 | orchestrator | 2025-04-05 12:30:21.758714 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-05 12:30:21.758728 | orchestrator | Saturday 05 April 2025 12:24:15 +0000 (0:00:01.813) 0:00:15.467 ******** 2025-04-05 12:30:21.759177 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-05 12:30:21.759210 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-05 12:30:21.759266 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-05 12:30:21.759283 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-05 12:30:21.759297 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-05 12:30:21.759311 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-05 12:30:21.759325 | orchestrator | 2025-04-05 12:30:21.759339 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-05 12:30:21.759354 | orchestrator | Saturday 05 April 2025 12:24:17 +0000 (0:00:02.342) 0:00:17.809 ******** 2025-04-05 12:30:21.759367 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.759382 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.759395 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.759409 | orchestrator | 2025-04-05 12:30:21.759423 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-05 12:30:21.759437 | orchestrator | Saturday 05 April 2025 12:24:19 +0000 (0:00:01.739) 0:00:19.548 ******** 2025-04-05 12:30:21.759451 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.759465 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.759479 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.759493 | orchestrator | 2025-04-05 12:30:21.759507 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-05 12:30:21.759521 | orchestrator | Saturday 05 April 2025 12:24:21 +0000 (0:00:01.911) 0:00:21.460 ******** 2025-04-05 12:30:21.759600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.759619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.759647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.759662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.759677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.759702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.759812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.759830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.759854 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.759872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.759889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.759905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.759936 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.759965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}}) 
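
Note: the long item dumps in this play show kolla-ansible's loadbalancer role iterating over a per-node service map. Each entry carries the container name, image, volumes, healthcheck and an "enabled" flag; enabled services (haproxy, proxysql, keepalived) report "changed" while disabled ones (haproxy-ssh, enabled: False) report "skipping". The sketch below illustrates that pattern only; the variable name loadbalancer_services and the /etc/kolla path are assumptions for illustration, not the role's actual code.

    # Sketch only: loop over a service map and act only on enabled services.
    # "loadbalancer_services" and the /etc/kolla target path are assumed names.
    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      when: item.value.enabled | bool
      with_dict: "{{ loadbalancer_services }}"

Each loop item is a key/value pair (key: haproxy, value: {container_name, image, volumes, healthcheck, enabled, ...}), which is exactly the structure echoed in the "changed:" and "skipping:" lines above and in the tasks that follow.
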
 2025-04-05 12:30:21.759982 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.759998 | orchestrator | 2025-04-05 12:30:21.760014 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-05 12:30:21.760029 | orchestrator | Saturday 05 April 2025 12:24:24 +0000 (0:00:03.002) 0:00:24.462 ******** 2025-04-05 12:30:21.760045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.760244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.760280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.760342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760374 | orchestrator | 2025-04-05 12:30:21.760388 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-05 12:30:21.760402 | orchestrator | Saturday 05 April 2025 12:24:27 +0000 (0:00:03.429) 0:00:27.892 ******** 2025-04-05 12:30:21.760426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.760574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.760589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.760631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.760665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.760729 | orchestrator | 2025-04-05 12:30:21.760745 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-05 12:30:21.760815 | orchestrator | Saturday 05 April 2025 12:24:30 +0000 (0:00:03.040) 0:00:30.932 ******** 2025-04-05 12:30:21.760833 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-05 12:30:21.760848 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-05 12:30:21.760863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-05 12:30:21.760876 | orchestrator | 2025-04-05 12:30:21.760890 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-05 12:30:21.760904 | orchestrator | Saturday 05 April 2025 12:24:34 +0000 (0:00:03.428) 0:00:34.361 ******** 2025-04-05 12:30:21.760918 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-05 12:30:21.760932 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-05 12:30:21.760953 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-05 12:30:21.760977 | orchestrator | 2025-04-05 
12:30:21.760991 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-05 12:30:21.761005 | orchestrator | Saturday 05 April 2025 12:24:37 +0000 (0:00:03.366) 0:00:37.728 ******** 2025-04-05 12:30:21.761019 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.761033 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.761047 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.761061 | orchestrator | 2025-04-05 12:30:21.761075 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-05 12:30:21.761184 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:01.474) 0:00:39.202 ******** 2025-04-05 12:30:21.761199 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-05 12:30:21.761214 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-05 12:30:21.761228 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-05 12:30:21.761242 | orchestrator | 2025-04-05 12:30:21.761256 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-05 12:30:21.761270 | orchestrator | Saturday 05 April 2025 12:24:42 +0000 (0:00:03.135) 0:00:42.337 ******** 2025-04-05 12:30:21.761320 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-05 12:30:21.761337 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-05 12:30:21.761349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-05 12:30:21.761362 | orchestrator | 2025-04-05 12:30:21.761374 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-05 12:30:21.761387 | orchestrator | Saturday 05 April 2025 12:24:44 +0000 (0:00:02.694) 0:00:45.032 ******** 2025-04-05 12:30:21.761485 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-05 12:30:21.761499 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-05 12:30:21.761511 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-05 12:30:21.761524 | orchestrator | 2025-04-05 12:30:21.761536 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-05 12:30:21.761548 | orchestrator | Saturday 05 April 2025 12:24:47 +0000 (0:00:02.216) 0:00:47.248 ******** 2025-04-05 12:30:21.761561 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-05 12:30:21.761573 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-05 12:30:21.761585 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-05 12:30:21.761597 | orchestrator | 2025-04-05 12:30:21.761609 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-05 12:30:21.761622 | orchestrator | Saturday 05 April 2025 12:24:48 +0000 (0:00:01.662) 0:00:48.911 ******** 2025-04-05 12:30:21.761642 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 
12:30:21.761654 | orchestrator | 2025-04-05 12:30:21.761667 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-05 12:30:21.761679 | orchestrator | Saturday 05 April 2025 12:24:49 +0000 (0:00:00.539) 0:00:49.451 ******** 2025-04-05 12:30:21.761693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.761942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.761953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.761971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.761982 | orchestrator | 2025-04-05 12:30:21.761992 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-05 12:30:21.762003 | orchestrator | Saturday 05 April 2025 12:24:52 +0000 (0:00:03.376) 0:00:52.827 ******** 2025-04-05 12:30:21.762068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.762104 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.762115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.762153 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.762169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.762203 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.762215 | orchestrator | 2025-04-05 12:30:21.762226 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-05 12:30:21.762236 | orchestrator | Saturday 05 April 2025 12:24:53 +0000 (0:00:00.713) 0:00:53.541 ******** 2025-04-05 12:30:21.762247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-04-05 12:30:21.762283 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.762294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.762339 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.762349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-05 12:30:21.762360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-05 12:30:21.762377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-05 12:30:21.762387 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.762398 | orchestrator | 2025-04-05 12:30:21.762408 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-04-05 12:30:21.762418 | orchestrator | Saturday 05 April 2025 12:24:54 +0000 (0:00:01.109) 0:00:54.650 ******** 2025-04-05 12:30:21.762429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-05 12:30:21.762439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-05 12:30:21.762449 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-05 12:30:21.762459 | orchestrator | 2025-04-05 12:30:21.762470 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-04-05 12:30:21.762480 | orchestrator | Saturday 05 April 2025 12:24:56 +0000 (0:00:01.786) 0:00:56.437 ******** 2025-04-05 12:30:21.762490 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-05 12:30:21.762500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-05 12:30:21.762515 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-05 12:30:21.762525 | orchestrator | 2025-04-05 12:30:21.762535 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-04-05 12:30:21.762545 | orchestrator | Saturday 05 April 2025 12:24:58 +0000 (0:00:02.023) 0:00:58.460 ******** 2025-04-05 12:30:21.762556 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:30:21.762570 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:30:21.762581 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:30:21.762591 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 12:30:21.762601 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.762611 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 12:30:21.762621 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.762693 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 12:30:21.762705 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.762716 | orchestrator | 2025-04-05 12:30:21.762726 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-04-05 12:30:21.762736 | orchestrator | Saturday 05 April 2025 12:24:59 +0000 (0:00:01.319) 0:00:59.780 ******** 2025-04-05 12:30:21.762747 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-05 12:30:21.762839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.762855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.762865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.762876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-05 12:30:21.762887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.762897 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed', '__omit_place_holder__9343f9bd1f55d6654640c11764f19529f1f501ed'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-05 12:30:21.762908 | orchestrator | 2025-04-05 12:30:21.762922 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-04-05 12:30:21.762933 | orchestrator | Saturday 05 April 2025 12:25:02 +0000 (0:00:02.326) 0:01:02.106 ******** 2025-04-05 12:30:21.762943 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.762953 | orchestrator | 2025-04-05 12:30:21.762963 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-04-05 12:30:21.762973 | orchestrator | Saturday 05 April 2025 12:25:02 +0000 (0:00:00.598) 0:01:02.705 ******** 2025-04-05 12:30:21.762984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-05 12:30:21.763001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-05 12:30:21.763022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-05 12:30:21.763098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763129 | orchestrator | 2025-04-05 12:30:21.763140 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-04-05 12:30:21.763150 | orchestrator | Saturday 05 April 2025 12:25:06 +0000 (0:00:03.548) 0:01:06.253 ******** 2025-04-05 12:30:21.763165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-05 12:30:21.763181 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-05 12:30:21.763239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763260 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.763292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763314 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.763325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-05 12:30:21.763335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.763346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763367 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.763377 | orchestrator | 2025-04-05 12:30:21.763457 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-04-05 12:30:21.763474 | orchestrator | Saturday 05 April 2025 12:25:07 +0000 (0:00:00.875) 0:01:07.129 ******** 2025-04-05 12:30:21.763485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763511 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.763522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763542 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.763552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-05 12:30:21.763572 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.763583 | orchestrator | 2025-04-05 12:30:21.763593 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-05 12:30:21.763603 | orchestrator | Saturday 05 April 2025 12:25:08 +0000 (0:00:01.078) 0:01:08.207 ******** 2025-04-05 12:30:21.763613 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.763623 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.763633 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.763644 | orchestrator | 2025-04-05 12:30:21.763654 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-05 12:30:21.763664 | orchestrator | Saturday 05 April 2025 12:25:09 +0000 (0:00:01.369) 0:01:09.577 ******** 2025-04-05 12:30:21.763674 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.763684 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.763694 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.763704 | orchestrator | 2025-04-05 12:30:21.763715 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-05 12:30:21.763725 | orchestrator | Saturday 05 April 2025 12:25:11 +0000 (0:00:02.133) 0:01:11.710 ******** 2025-04-05 12:30:21.763734 | orchestrator | included: barbican for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-04-05 12:30:21.763744 | orchestrator | 2025-04-05 12:30:21.763754 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-04-05 12:30:21.763781 | orchestrator | Saturday 05 April 2025 12:25:12 +0000 (0:00:01.183) 0:01:12.894 ******** 2025-04-05 12:30:21.763792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.763813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.763863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.763904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763926 | orchestrator | 2025-04-05 12:30:21.763936 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-05 12:30:21.763946 | orchestrator | Saturday 05 April 2025 12:25:18 +0000 (0:00:06.130) 0:01:19.025 ******** 2025-04-05 12:30:21.763971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.763982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.763993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.764009 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.764036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.764047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.764057 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.764101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.764111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.764122 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764132 | orchestrator | 2025-04-05 12:30:21.764143 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-05 12:30:21.764153 | orchestrator | Saturday 05 April 2025 12:25:19 +0000 (0:00:00.890) 0:01:19.916 ******** 2025-04-05 12:30:21.764163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764190 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764228 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-05 12:30:21.764258 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764268 | orchestrator | 2025-04-05 12:30:21.764278 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-05 12:30:21.764288 | orchestrator | Saturday 05 April 2025 12:25:21 +0000 (0:00:01.128) 0:01:21.045 ******** 2025-04-05 12:30:21.764298 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.764308 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.764318 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.764328 | orchestrator | 2025-04-05 12:30:21.764338 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 
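
The per-service items that the haproxy-config tasks above loop over are easier to follow when pulled out of the log. Below is a minimal sketch in Python, reconstructed from the barbican-api item logged for testbed-node-1 (including the trailing empty volume entry that appears in the log); the helper that splits the haproxy sub-entries into internal and external frontends is purely illustrative and not part of kolla-ansible.

# Illustrative only: service definition reconstructed from the logged item for
# testbed-node-1.  The helper below is a sketch, not kolla-ansible code.
barbican_api = {
    "container_name": "barbican_api",
    "group": "barbican-api",
    "enabled": True,
    "environment": {"CS_AUTH_KEYS": ""},
    "image": "registry.osism.tech/kolla/barbican-api:2024.1",
    "volumes": [
        "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "barbican:/var/lib/barbican/",
        "kolla_logs:/var/log/kolla/",
        "",  # trailing empty entry is present in the logged item as well
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9311"],
        "timeout": "30",
    },
    "haproxy": {
        "barbican_api": {"enabled": "yes", "mode": "http", "external": False,
                         "port": "9311", "listen_port": "9311", "tls_backend": "no"},
        "barbican_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "9311", "listen_port": "9311", "tls_backend": "no"},
    },
}

def split_frontends(service: dict) -> tuple[dict, dict]:
    """Partition the haproxy sub-entries by their 'external' flag (sketch only)."""
    entries = service.get("haproxy", {})
    internal = {k: v for k, v in entries.items() if not v["external"]}
    external = {k: v for k, v in entries.items() if v["external"]}
    return internal, external

internal, external = split_frontends(barbican_api)
# internal -> {'barbican_api': ...}           listener on the internal VIP, port 9311
# external -> {'barbican_api_external': ...}  served under api.testbed.osism.xyz

The same split explains the paired <name> / <name>_external entries seen for aodh and ceph-rgw in this run: the internal frontend listens on the internal network, while the external one carries the external_fqdn api.testbed.osism.xyz.
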
2025-04-05 12:30:21.764352 | orchestrator | Saturday 05 April 2025 12:25:22 +0000 (0:00:01.152) 0:01:22.197 ******** 2025-04-05 12:30:21.764362 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.764377 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.764387 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.764397 | orchestrator | 2025-04-05 12:30:21.764407 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-05 12:30:21.764417 | orchestrator | Saturday 05 April 2025 12:25:24 +0000 (0:00:02.300) 0:01:24.497 ******** 2025-04-05 12:30:21.764427 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764437 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764447 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764457 | orchestrator | 2025-04-05 12:30:21.764467 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-05 12:30:21.764477 | orchestrator | Saturday 05 April 2025 12:25:24 +0000 (0:00:00.473) 0:01:24.971 ******** 2025-04-05 12:30:21.764487 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.764497 | orchestrator | 2025-04-05 12:30:21.764507 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-05 12:30:21.764517 | orchestrator | Saturday 05 April 2025 12:25:25 +0000 (0:00:00.615) 0:01:25.587 ******** 2025-04-05 12:30:21.764528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-05 12:30:21.764539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-05 12:30:21.764562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-05 12:30:21.764574 | orchestrator | 2025-04-05 12:30:21.764584 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-05 12:30:21.764594 | orchestrator | Saturday 05 April 2025 12:25:28 +0000 (0:00:02.715) 0:01:28.303 ******** 2025-04-05 12:30:21.764610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-05 12:30:21.764621 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-05 12:30:21.764642 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-05 12:30:21.764674 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764684 | orchestrator | 2025-04-05 12:30:21.764694 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-05 12:30:21.764704 | orchestrator | Saturday 05 April 2025 12:25:30 +0000 (0:00:02.510) 0:01:30.813 ******** 2025-04-05 12:30:21.764720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764743 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764828 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-05 12:30:21.764859 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764869 | orchestrator | 2025-04-05 12:30:21.764879 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-05 12:30:21.764889 | 
orchestrator | Saturday 05 April 2025 12:25:33 +0000 (0:00:02.886) 0:01:33.700 ******** 2025-04-05 12:30:21.764898 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764908 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764918 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.764928 | orchestrator | 2025-04-05 12:30:21.764939 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-05 12:30:21.764949 | orchestrator | Saturday 05 April 2025 12:25:34 +0000 (0:00:00.650) 0:01:34.350 ******** 2025-04-05 12:30:21.764959 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.764969 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.764979 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.765074 | orchestrator | 2025-04-05 12:30:21.765086 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-05 12:30:21.765106 | orchestrator | Saturday 05 April 2025 12:25:35 +0000 (0:00:01.177) 0:01:35.528 ******** 2025-04-05 12:30:21.765116 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.765126 | orchestrator | 2025-04-05 12:30:21.765136 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-05 12:30:21.765146 | orchestrator | Saturday 05 April 2025 12:25:36 +0000 (0:00:00.643) 0:01:36.171 ******** 2025-04-05 12:30:21.765163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.765181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.765190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765258 
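Each container item in these tasks carries a healthcheck mapping (interval, retries, start_period, test, timeout). A rough, hedged sketch of how such a mapping could be expressed as container-engine health-check flags; interpreting the bare numbers as seconds is an assumption, and this is not kolla_docker's actual logic:

# Illustration only: map a kolla-style healthcheck dict, like the cinder-api one
# above, onto docker-run style flags. CMD-SHELL means the test string runs via a shell.
def healthcheck_flags(hc: dict) -> list[str]:
    test = hc["test"]
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f'--health-cmd="{cmd}"',                      # shell-quoted for readability
        f"--health-interval={hc['interval']}s",       # seconds assumed
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

cinder_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
}
print(" ".join(healthcheck_flags(cinder_api_hc)))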
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.765282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765316 | orchestrator | 2025-04-05 12:30:21.765325 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-05 12:30:21.765334 | orchestrator | Saturday 05 April 2025 12:25:40 +0000 (0:00:04.558) 0:01:40.729 ******** 2025-04-05 12:30:21.765343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.765352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765385 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.765398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.765412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765445 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.765454 | orchestrator | 
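The cinder-api haproxy mapping shows the internal/external pair used throughout this run: one entry bound on the internal VIP and a second, external one tied to api.testbed.osism.xyz on the same port. The skipped "single external frontend" tasks suggest each external service keeps its own frontend here. A hedged sketch of that split, with an assumed internal VIP and illustrative names:

# Sketch only: enumerate the frontends implied by a service's haproxy entries.
INTERNAL_VIP = "192.168.16.9"  # assumed internal VIP for this testbed

def frontends(service: str, entries: dict) -> list[str]:
    out = []
    for name, cfg in entries.items():
        if str(cfg.get("enabled", "no")).lower() not in ("yes", "true"):
            continue
        bind = cfg["external_fqdn"] if cfg.get("external") else INTERNAL_VIP
        out.append(f"frontend {name}: bind {bind}:{cfg['listen_port']} -> {service} backend :{cfg['port']}")
    return out

cinder = {
    "cinder_api": {"enabled": "yes", "mode": "http", "external": False,
                   "port": "8776", "listen_port": "8776", "tls_backend": "no"},
    "cinder_api_external": {"enabled": "yes", "mode": "http", "external": True,
                            "external_fqdn": "api.testbed.osism.xyz",
                            "port": "8776", "listen_port": "8776", "tls_backend": "no"},
}
for line in frontends("cinder-api", cinder):
    print(line)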
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.765463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765499 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.765507 | orchestrator | 2025-04-05 12:30:21.765516 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-05 12:30:21.765525 | orchestrator | Saturday 05 April 2025 12:25:42 +0000 (0:00:01.682) 0:01:42.412 ******** 2025-04-05 12:30:21.765533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765551 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.765560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765595 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.765603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-05 12:30:21.765621 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.765636 | orchestrator | 2025-04-05 12:30:21.765645 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-05 12:30:21.765653 | orchestrator | Saturday 05 April 2025 12:25:43 +0000 (0:00:01.446) 0:01:43.859 ******** 2025-04-05 12:30:21.765662 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.765671 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.765679 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.765688 | orchestrator | 2025-04-05 12:30:21.765697 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-05 12:30:21.765706 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:01.238) 0:01:45.097 ******** 2025-04-05 12:30:21.765715 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.765723 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.765732 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.765740 | orchestrator | 2025-04-05 12:30:21.765749 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-05 12:30:21.765757 | orchestrator | Saturday 05 April 2025 12:25:47 +0000 (0:00:02.544) 0:01:47.642 ******** 2025-04-05 12:30:21.765778 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.765787 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.765796 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.765804 | orchestrator | 2025-04-05 12:30:21.765812 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-04-05 12:30:21.765821 | orchestrator | Saturday 05 April 2025 12:25:48 +0000 (0:00:00.508) 0:01:48.150 ******** 2025-04-05 12:30:21.765829 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.765838 | orchestrator | skipping: 
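The two "changed" proxysql-config tasks above drop a per-service users file and a rules file for cinder on each controller. A schematic sketch of the shape such data could take, using generic ProxySQL table names (mysql_users, mysql_query_rules); the actual kolla-ansible templates may differ and the password is a placeholder:

import json

# Schematic only, not the real template: one DB user for the service and one
# query rule routing its schema to a writer hostgroup.
cinder_proxysql_users = [
    {"username": "cinder", "password": "<from-passwords.yml>", "default_hostgroup": 0},
]
cinder_proxysql_rules = [
    {"rule_id": 1, "active": 1, "schemaname": "cinder", "destination_hostgroup": 0, "apply": 1},
]

print(json.dumps({"mysql_users": cinder_proxysql_users,
                  "mysql_query_rules": cinder_proxysql_rules}, indent=2))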
[testbed-node-1] 2025-04-05 12:30:21.765846 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.765854 | orchestrator | 2025-04-05 12:30:21.765863 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-05 12:30:21.765872 | orchestrator | Saturday 05 April 2025 12:25:48 +0000 (0:00:00.506) 0:01:48.656 ******** 2025-04-05 12:30:21.765884 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.765893 | orchestrator | 2025-04-05 12:30:21.765901 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-05 12:30:21.765909 | orchestrator | Saturday 05 April 2025 12:25:49 +0000 (0:00:00.803) 0:01:49.459 ******** 2025-04-05 12:30:21.765918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:30:21.765935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.765945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:30:21.765959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.765982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.765998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:30:21.766057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.766105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-04-05 12:30:21.766184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766193 | orchestrator | 2025-04-05 12:30:21.766209 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-05 12:30:21.766218 | orchestrator | Saturday 05 April 2025 12:25:53 +0000 (0:00:03.586) 0:01:53.046 ******** 2025-04-05 12:30:21.766227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:30:21.766236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.766245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766304 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.766313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:30:21.766322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.766337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766391 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.766400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:30:21.766415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:30:21.766429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.766478 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.766487 | orchestrator | 2025-04-05 12:30:21.766496 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-05 12:30:21.766505 | orchestrator | Saturday 05 April 2025 12:25:53 +0000 (0:00:00.790) 0:01:53.836 ******** 2025-04-05 12:30:21.766513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766531 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.766540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766557 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.766565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-05 12:30:21.766582 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.766591 | orchestrator | 2025-04-05 12:30:21.766603 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-05 12:30:21.766611 | orchestrator | Saturday 05 April 2025 12:25:54 +0000 (0:00:00.957) 0:01:54.794 ******** 2025-04-05 12:30:21.766624 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.766633 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.766641 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.766650 | orchestrator | 2025-04-05 12:30:21.766659 | orchestrator | TASK [proxysql-config : Copying over 
designate ProxySQL rules config] ********** 2025-04-05 12:30:21.766667 | orchestrator | Saturday 05 April 2025 12:25:55 +0000 (0:00:00.935) 0:01:55.730 ******** 2025-04-05 12:30:21.766676 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.766684 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.766693 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.766701 | orchestrator | 2025-04-05 12:30:21.766710 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-05 12:30:21.766718 | orchestrator | Saturday 05 April 2025 12:25:57 +0000 (0:00:01.689) 0:01:57.419 ******** 2025-04-05 12:30:21.766726 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.766735 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.766743 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.766752 | orchestrator | 2025-04-05 12:30:21.766775 | orchestrator | TASK [include_role : glance] *************************************************** 2025-04-05 12:30:21.766784 | orchestrator | Saturday 05 April 2025 12:25:57 +0000 (0:00:00.329) 0:01:57.748 ******** 2025-04-05 12:30:21.766792 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.766801 | orchestrator | 2025-04-05 12:30:21.766809 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-05 12:30:21.766818 | orchestrator | Saturday 05 April 2025 12:25:58 +0000 (0:00:00.785) 0:01:58.534 ******** 2025-04-05 12:30:21.766833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:30:21.766854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.766869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:30:21.766888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.766902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 
12:30:21.766912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.766931 | orchestrator | 2025-04-05 12:30:21.766940 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-05 12:30:21.766949 | orchestrator | Saturday 05 April 2025 12:26:02 +0000 (0:00:04.261) 0:02:02.796 ******** 2025-04-05 12:30:21.766963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:30:21.766982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.766997 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:30:21.767027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.767036 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:30:21.767065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.767080 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767089 | orchestrator | 2025-04-05 12:30:21.767098 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-05 12:30:21.767106 | orchestrator | Saturday 05 April 2025 12:26:05 +0000 (0:00:02.631) 0:02:05.428 ******** 2025-04-05 12:30:21.767115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 
12:30:21.767129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 12:30:21.767139 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 12:30:21.767162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 12:30:21.767170 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 12:30:21.767188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-05 12:30:21.767197 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767206 | orchestrator | 2025-04-05 12:30:21.767215 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-05 12:30:21.767223 | orchestrator | Saturday 05 April 2025 12:26:09 +0000 (0:00:03.871) 0:02:09.299 ******** 2025-04-05 12:30:21.767232 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.767241 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.767251 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.767265 | orchestrator | 2025-04-05 12:30:21.767275 
| orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-05 12:30:21.767283 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:01.295) 0:02:10.595 ******** 2025-04-05 12:30:21.767292 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.767302 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.767315 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.767324 | orchestrator | 2025-04-05 12:30:21.767333 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-05 12:30:21.767342 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:01.892) 0:02:12.487 ******** 2025-04-05 12:30:21.767350 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767359 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767368 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767376 | orchestrator | 2025-04-05 12:30:21.767385 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-05 12:30:21.767393 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.311) 0:02:12.799 ******** 2025-04-05 12:30:21.767402 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.767410 | orchestrator | 2025-04-05 12:30:21.767422 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-05 12:30:21.767431 | orchestrator | Saturday 05 April 2025 12:26:13 +0000 (0:00:00.754) 0:02:13.553 ******** 2025-04-05 12:30:21.767439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:30:21.767461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:30:21.767471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:30:21.767480 | orchestrator | 2025-04-05 12:30:21.767489 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-04-05 12:30:21.767497 | orchestrator | Saturday 05 April 2025 12:26:17 +0000 (0:00:03.600) 0:02:17.153 ******** 2025-04-05 12:30:21.767507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:30:21.767526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:30:21.767536 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767545 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:30:21.767562 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767571 | orchestrator | 2025-04-05 12:30:21.767580 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-05 12:30:21.767588 | orchestrator | Saturday 05 April 2025 12:26:17 +0000 (0:00:00.536) 0:02:17.690 ******** 2025-04-05 12:30:21.767597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-05 12:30:21.767609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-05 
12:30:21.767617 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-05 12:30:21.767638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-05 12:30:21.767647 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-05 12:30:21.767665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-05 12:30:21.767673 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767682 | orchestrator | 2025-04-05 12:30:21.767691 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-05 12:30:21.767699 | orchestrator | Saturday 05 April 2025 12:26:18 +0000 (0:00:00.557) 0:02:18.248 ******** 2025-04-05 12:30:21.767708 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.767716 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.767725 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.767733 | orchestrator | 2025-04-05 12:30:21.767742 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-05 12:30:21.767754 | orchestrator | Saturday 05 April 2025 12:26:19 +0000 (0:00:01.458) 0:02:19.707 ******** 2025-04-05 12:30:21.767778 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.767787 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.767795 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.767804 | orchestrator | 2025-04-05 12:30:21.767812 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-04-05 12:30:21.767821 | orchestrator | Saturday 05 April 2025 12:26:21 +0000 (0:00:01.906) 0:02:21.614 ******** 2025-04-05 12:30:21.767829 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.767838 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.767846 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.767855 | orchestrator | 2025-04-05 12:30:21.767863 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-05 12:30:21.767872 | orchestrator | Saturday 05 April 2025 12:26:21 +0000 (0:00:00.291) 0:02:21.905 ******** 2025-04-05 12:30:21.767880 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.767889 | orchestrator | 2025-04-05 12:30:21.767897 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-05 12:30:21.767906 | orchestrator | Saturday 05 April 2025 12:26:22 +0000 (0:00:00.924) 0:02:22.830 ******** 2025-04-05 12:30:21.767915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:30:21.767937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:30:21.767964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:30:21.767974 | orchestrator | 2025-04-05 12:30:21.767983 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-05 12:30:21.767992 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:03.506) 0:02:26.336 ******** 2025-04-05 12:30:21.768015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:30:21.768031 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:30:21.768064 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:30:21.768083 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768092 | orchestrator | 2025-04-05 12:30:21.768100 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-05 12:30:21.768109 | orchestrator | Saturday 05 April 2025 12:26:27 +0000 (0:00:01.019) 0:02:27.355 ******** 2025-04-05 12:30:21.768118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-05 12:30:21.768172 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-05 12:30:21.768235 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-05 12:30:21.768270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-05 12:30:21.768279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-05 12:30:21.768287 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768296 | orchestrator | 2025-04-05 12:30:21.768305 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-05 12:30:21.768318 | orchestrator | Saturday 05 April 2025 12:26:28 +0000 (0:00:01.178) 0:02:28.534 ******** 2025-04-05 12:30:21.768326 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.768335 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.768343 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.768352 | orchestrator | 2025-04-05 12:30:21.768360 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-05 12:30:21.768369 | orchestrator | Saturday 05 April 2025 12:26:29 +0000 (0:00:01.115) 0:02:29.650 ******** 2025-04-05 12:30:21.768377 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.768386 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.768394 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.768403 | orchestrator | 2025-04-05 12:30:21.768415 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-05 12:30:21.768424 | orchestrator | Saturday 05 April 2025 12:26:31 +0000 (0:00:01.666) 0:02:31.316 ******** 2025-04-05 12:30:21.768432 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768441 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768449 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768458 | orchestrator | 2025-04-05 12:30:21.768466 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-05 12:30:21.768475 | orchestrator | Saturday 05 April 2025 12:26:31 +0000 (0:00:00.365) 0:02:31.682 ******** 2025-04-05 12:30:21.768483 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768492 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768500 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768509 | orchestrator | 2025-04-05 12:30:21.768517 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-05 12:30:21.768525 | orchestrator | Saturday 05 April 2025 12:26:31 +0000 (0:00:00.349) 0:02:32.032 ******** 2025-04-05 12:30:21.768534 | orchestrator | included: keystone 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.768542 | orchestrator | 2025-04-05 12:30:21.768554 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-05 12:30:21.768563 | orchestrator | Saturday 05 April 2025 12:26:33 +0000 (0:00:01.138) 0:02:33.170 ******** 2025-04-05 12:30:21.768571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:30:21.768581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:30:21.768618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:30:21.768634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768674 | orchestrator | 2025-04-05 12:30:21.768683 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-05 12:30:21.768692 | orchestrator | Saturday 05 April 2025 12:26:36 +0000 (0:00:03.694) 0:02:36.864 ******** 2025-04-05 12:30:21.768705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:30:21.768714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768739 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:30:21.768775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768799 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:30:21.768823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:30:21.768832 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:30:21.768848 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768856 | orchestrator | 2025-04-05 12:30:21.768865 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-05 12:30:21.768874 | orchestrator | Saturday 05 April 2025 12:26:37 +0000 (0:00:00.797) 0:02:37.662 ******** 2025-04-05 12:30:21.768883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768903 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.768912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768929 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.768938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-05 12:30:21.768960 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.768969 | orchestrator | 2025-04-05 12:30:21.768978 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-05 12:30:21.768987 | orchestrator | Saturday 05 April 2025 12:26:38 +0000 (0:00:01.040) 0:02:38.703 ******** 2025-04-05 12:30:21.768995 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.769004 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.769012 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.769021 | orchestrator | 2025-04-05 12:30:21.769029 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-05 12:30:21.769038 | orchestrator | Saturday 05 April 2025 12:26:39 +0000 (0:00:01.221) 0:02:39.924 ******** 2025-04-05 12:30:21.769046 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.769055 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.769067 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.769076 | orchestrator | 2025-04-05 12:30:21.769084 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-05 12:30:21.769093 | orchestrator | Saturday 05 April 2025 12:26:41 +0000 (0:00:02.046) 0:02:41.970 ******** 2025-04-05 12:30:21.769101 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.769110 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.769118 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.769127 | orchestrator | 2025-04-05 12:30:21.769135 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-05 12:30:21.769144 | orchestrator | Saturday 05 April 2025 12:26:42 +0000 (0:00:00.438) 0:02:42.408 ******** 2025-04-05 12:30:21.769157 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.769165 | orchestrator | 2025-04-05 12:30:21.769174 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-05 12:30:21.769183 | orchestrator | Saturday 05 April 2025 12:26:43 +0000 (0:00:00.930) 0:02:43.339 ******** 2025-04-05 12:30:21.769191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:30:21.769201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:30:21.769230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:30:21.769253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769262 | orchestrator | 2025-04-05 12:30:21.769271 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-05 12:30:21.769280 | orchestrator | Saturday 05 April 2025 12:26:47 +0000 (0:00:03.697) 0:02:47.036 ******** 2025-04-05 12:30:21.769288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:30:21.769303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769312 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.769326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:30:21.769340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769349 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.769363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:30:21.769373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769382 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.769390 | orchestrator | 2025-04-05 12:30:21.769399 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-05 12:30:21.769408 | orchestrator | Saturday 05 April 2025 12:26:47 +0000 (0:00:00.781) 0:02:47.818 ******** 2025-04-05 12:30:21.769420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769438 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.769446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769467 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.769480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-05 12:30:21.769497 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.769506 
| orchestrator | 2025-04-05 12:30:21.769514 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-04-05 12:30:21.769523 | orchestrator | Saturday 05 April 2025 12:26:48 +0000 (0:00:01.169) 0:02:48.987 ******** 2025-04-05 12:30:21.769531 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.769540 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.769548 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.769557 | orchestrator | 2025-04-05 12:30:21.769565 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-05 12:30:21.769574 | orchestrator | Saturday 05 April 2025 12:26:50 +0000 (0:00:01.492) 0:02:50.479 ******** 2025-04-05 12:30:21.769582 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.769591 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.769599 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.769608 | orchestrator | 2025-04-05 12:30:21.769616 | orchestrator | TASK [include_role : manila] *************************************************** 2025-04-05 12:30:21.769625 | orchestrator | Saturday 05 April 2025 12:26:52 +0000 (0:00:02.002) 0:02:52.482 ******** 2025-04-05 12:30:21.769633 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.769642 | orchestrator | 2025-04-05 12:30:21.769650 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-05 12:30:21.769659 | orchestrator | Saturday 05 April 2025 12:26:53 +0000 (0:00:01.236) 0:02:53.718 ******** 2025-04-05 12:30:21.769668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-05 12:30:21.769677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-05 12:30:21.769703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-05 12:30:21.769823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.769983 | orchestrator | 2025-04-05 12:30:21.769993 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-05 12:30:21.770002 | orchestrator | Saturday 05 April 2025 12:26:58 +0000 (0:00:04.852) 0:02:58.571 ******** 2025-04-05 12:30:21.770011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-05 12:30:21.770042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770094 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-05 12:30:21.770111 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770143 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-05 12:30:21.770168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.770194 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.770202 | orchestrator | 2025-04-05 12:30:21.770210 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-05 12:30:21.770218 | orchestrator | Saturday 05 April 2025 12:26:59 +0000 (0:00:00.958) 0:02:59.530 ******** 2025-04-05 12:30:21.770226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770243 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770267 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-05 12:30:21.770295 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.770303 | orchestrator | 2025-04-05 12:30:21.770315 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-05 12:30:21.770323 | orchestrator | Saturday 05 April 2025 12:27:00 +0000 (0:00:01.383) 0:03:00.914 ******** 2025-04-05 12:30:21.770331 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.770339 
| orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.770347 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.770354 | orchestrator | 2025-04-05 12:30:21.770362 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-05 12:30:21.770370 | orchestrator | Saturday 05 April 2025 12:27:02 +0000 (0:00:01.311) 0:03:02.225 ******** 2025-04-05 12:30:21.770378 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.770386 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.770394 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.770401 | orchestrator | 2025-04-05 12:30:21.770409 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-04-05 12:30:21.770417 | orchestrator | Saturday 05 April 2025 12:27:04 +0000 (0:00:02.103) 0:03:04.329 ******** 2025-04-05 12:30:21.770425 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.770453 | orchestrator | 2025-04-05 12:30:21.770461 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-04-05 12:30:21.770469 | orchestrator | Saturday 05 April 2025 12:27:05 +0000 (0:00:01.273) 0:03:05.602 ******** 2025-04-05 12:30:21.770477 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-05 12:30:21.770485 | orchestrator | 2025-04-05 12:30:21.770493 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-04-05 12:30:21.770504 | orchestrator | Saturday 05 April 2025 12:27:08 +0000 (0:00:02.902) 0:03:08.505 ******** 2025-04-05 12:30:21.770519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770533 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770541 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770579 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770620 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.770629 | orchestrator | 2025-04-05 12:30:21.770638 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-04-05 12:30:21.770647 | orchestrator | Saturday 05 April 2025 12:27:12 +0000 (0:00:03.569) 0:03:12.074 ******** 2025-04-05 12:30:21.770661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770699 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770722 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-05 12:30:21.770751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:30:21.770775 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.770784 | orchestrator | 2025-04-05 12:30:21.770793 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-05 12:30:21.770802 | orchestrator | Saturday 05 April 2025 12:27:14 +0000 (0:00:02.552) 0:03:14.627 ******** 2025-04-05 12:30:21.770811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770829 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770866 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-05 12:30:21.770894 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.770903 | orchestrator | 2025-04-05 12:30:21.770911 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-05 12:30:21.770921 | orchestrator | Saturday 05 April 2025 12:27:17 +0000 (0:00:02.804) 0:03:17.432 ******** 2025-04-05 12:30:21.770930 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.770938 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.770946 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.770954 | orchestrator | 2025-04-05 12:30:21.770962 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-05 12:30:21.770970 | orchestrator | Saturday 05 April 2025 12:27:19 +0000 (0:00:01.890) 0:03:19.323 ******** 2025-04-05 12:30:21.770978 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.770986 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.770994 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771002 | orchestrator | 2025-04-05 12:30:21.771010 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-05 12:30:21.771018 | orchestrator | Saturday 05 April 2025 12:27:20 +0000 (0:00:01.353) 0:03:20.676 ******** 2025-04-05 12:30:21.771026 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771034 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771042 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771049 | orchestrator | 2025-04-05 12:30:21.771057 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-04-05 12:30:21.771065 | orchestrator | Saturday 05 April 2025 12:27:21 +0000 (0:00:00.360) 0:03:21.036 ******** 2025-04-05 12:30:21.771073 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.771081 | orchestrator | 2025-04-05 12:30:21.771089 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-05 12:30:21.771097 | orchestrator | Saturday 05 April 2025 12:27:22 +0000 (0:00:01.077) 0:03:22.114 ******** 2025-04-05 12:30:21.771108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-05 12:30:21.771124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-05 12:30:21.771139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-05 12:30:21.771148 | orchestrator | 2025-04-05 12:30:21.771156 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-04-05 12:30:21.771164 | orchestrator | Saturday 05 April 2025 12:27:23 +0000 (0:00:01.556) 0:03:23.671 ******** 2025-04-05 12:30:21.771173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-05 12:30:21.771239 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-05 12:30:21.771257 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-05 12:30:21.771281 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771290 | orchestrator | 2025-04-05 12:30:21.771298 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-04-05 12:30:21.771306 | orchestrator | Saturday 05 April 2025 12:27:23 +0000 (0:00:00.311) 0:03:23.982 ******** 2025-04-05 12:30:21.771314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-05 12:30:21.771322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-05 12:30:21.771331 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771339 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-05 12:30:21.771358 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771366 | orchestrator | 2025-04-05 12:30:21.771374 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-05 12:30:21.771382 | orchestrator | Saturday 05 April 2025 12:27:24 +0000 (0:00:00.776) 0:03:24.759 ******** 2025-04-05 12:30:21.771390 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771398 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771406 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771414 | orchestrator | 2025-04-05 12:30:21.771422 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-04-05 12:30:21.771443 | orchestrator | Saturday 05 April 2025 12:27:25 +0000 (0:00:00.365) 0:03:25.124 ******** 2025-04-05 12:30:21.771452 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771460 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771468 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.771476 | orchestrator | 2025-04-05 12:30:21.771484 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-05 12:30:21.771492 | orchestrator | Saturday 05 April 2025 12:27:26 +0000 (0:00:01.278) 0:03:26.402 ******** 2025-04-05 12:30:21.771500 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.771507 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.771515 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:30:21.771523 | orchestrator | 2025-04-05 12:30:21.771531 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-05 12:30:21.771539 | orchestrator | Saturday 05 April 2025 12:27:26 +0000 (0:00:00.368) 0:03:26.771 ******** 2025-04-05 12:30:21.771547 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.771555 | orchestrator | 2025-04-05 12:30:21.771563 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-05 12:30:21.771571 | orchestrator | Saturday 05 April 2025 12:27:28 +0000 (0:00:01.330) 0:03:28.101 ******** 2025-04-05 12:30:21.771579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:30:21.771597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.771632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.771684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.771706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.771735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.771743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:30:21.771806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.771846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.771897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.771912 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.771923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.771942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.771949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771957 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:30:21.771968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.771986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.772082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.772185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.772211 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772264 | orchestrator | 2025-04-05 12:30:21.772272 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-05 12:30:21.772280 | orchestrator | Saturday 05 April 2025 12:27:32 +0000 (0:00:04.749) 0:03:32.851 ******** 2025-04-05 12:30:21.772287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:30:21.772301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.772368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:30:21.772403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.772497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.772567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.772606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772679 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.772686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.772717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.772798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.772806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772813 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.772831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:30:21.772883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 
12:30:21.772907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:30:21.772922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.772929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.772996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.773012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.773026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:30:21.773034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:30:21.773076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:30:21.773092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773100 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.773107 | orchestrator | 2025-04-05 12:30:21.773114 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-05 12:30:21.773121 | orchestrator | Saturday 05 April 2025 12:27:34 +0000 (0:00:01.534) 0:03:34.385 ******** 2025-04-05 12:30:21.773129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773143 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.773154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773168 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.773175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-05 12:30:21.773194 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.773201 | orchestrator | 2025-04-05 12:30:21.773208 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-05 12:30:21.773215 | orchestrator | Saturday 05 April 2025 12:27:36 +0000 (0:00:01.759) 0:03:36.145 ******** 2025-04-05 12:30:21.773222 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.773229 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.773236 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.773243 | orchestrator | 2025-04-05 12:30:21.773250 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-05 12:30:21.773257 | orchestrator | Saturday 05 April 2025 12:27:37 +0000 (0:00:01.315) 0:03:37.461 ******** 2025-04-05 12:30:21.773264 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.773271 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.773293 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.773302 | orchestrator | 2025-04-05 12:30:21.773310 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-05 12:30:21.773318 | orchestrator | Saturday 05 April 2025 12:27:39 +0000 (0:00:02.195) 0:03:39.656 ******** 2025-04-05 12:30:21.773325 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.773332 | orchestrator | 2025-04-05 12:30:21.773340 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-05 12:30:21.773347 | orchestrator | Saturday 05 April 2025 12:27:41 +0000 (0:00:01.438) 0:03:41.094 ******** 2025-04-05 12:30:21.773355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773369 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773392 | orchestrator | 2025-04-05 12:30:21.773400 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-04-05 12:30:21.773407 | orchestrator | Saturday 05 April 2025 12:27:45 +0000 (0:00:04.040) 0:03:45.135 ******** 2025-04-05 12:30:21.773433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.773443 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.773451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.773459 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.773472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.773480 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.773488 | orchestrator | 2025-04-05 12:30:21.773496 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-04-05 12:30:21.773504 | orchestrator | Saturday 05 April 2025 12:27:45 +0000 (0:00:00.860) 0:03:45.996 ******** 2025-04-05 12:30:21.773512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-05 12:30:21.773524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-05 12:30:21.773532 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.773540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-05 12:30:21.773547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-05 12:30:21.773555 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.773562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-05 12:30:21.773570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2025-04-05 12:30:21.773578 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.773585 | orchestrator | 2025-04-05 12:30:21.773593 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-04-05 12:30:21.773601 | orchestrator | Saturday 05 April 2025 12:27:46 +0000 (0:00:00.868) 0:03:46.864 ******** 2025-04-05 12:30:21.773608 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.773617 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.773633 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.773641 | orchestrator | 2025-04-05 12:30:21.773649 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-04-05 12:30:21.773657 | orchestrator | Saturday 05 April 2025 12:27:48 +0000 (0:00:01.280) 0:03:48.144 ******** 2025-04-05 12:30:21.773680 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.773689 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.773698 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.773705 | orchestrator | 2025-04-05 12:30:21.773713 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-05 12:30:21.773721 | orchestrator | Saturday 05 April 2025 12:27:50 +0000 (0:00:02.191) 0:03:50.336 ******** 2025-04-05 12:30:21.773729 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.773737 | orchestrator | 2025-04-05 12:30:21.773744 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-05 12:30:21.773756 | orchestrator | Saturday 05 April 2025 12:27:51 +0000 (0:00:01.192) 0:03:51.528 ******** 2025-04-05 12:30:21.773778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.773848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773881 | orchestrator | 2025-04-05 12:30:21.773889 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-05 12:30:21.773896 | orchestrator | Saturday 05 April 2025 12:27:55 +0000 (0:00:04.098) 0:03:55.627 ******** 2025-04-05 12:30:21.773925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.773934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773953 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.773960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.773973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.773996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.774005 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.774046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.774054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.774061 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774068 | orchestrator | 2025-04-05 12:30:21.774075 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-05 12:30:21.774082 | orchestrator | Saturday 05 April 2025 12:27:56 +0000 (0:00:00.860) 0:03:56.488 ******** 2025-04-05 12:30:21.774093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774122 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774184 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-05 12:30:21.774219 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774226 | orchestrator | 2025-04-05 12:30:21.774233 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-05 12:30:21.774240 | orchestrator | Saturday 05 April 2025 12:27:57 +0000 (0:00:01.389) 0:03:57.877 ******** 2025-04-05 12:30:21.774247 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.774254 | orchestrator | changed: 
[testbed-node-1] 2025-04-05 12:30:21.774261 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.774268 | orchestrator | 2025-04-05 12:30:21.774275 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-05 12:30:21.774282 | orchestrator | Saturday 05 April 2025 12:27:59 +0000 (0:00:01.357) 0:03:59.235 ******** 2025-04-05 12:30:21.774289 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.774296 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.774303 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.774310 | orchestrator | 2025-04-05 12:30:21.774317 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-05 12:30:21.774324 | orchestrator | Saturday 05 April 2025 12:28:01 +0000 (0:00:02.376) 0:04:01.611 ******** 2025-04-05 12:30:21.774331 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.774338 | orchestrator | 2025-04-05 12:30:21.774345 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-05 12:30:21.774352 | orchestrator | Saturday 05 April 2025 12:28:03 +0000 (0:00:01.539) 0:04:03.151 ******** 2025-04-05 12:30:21.774359 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-05 12:30:21.774367 | orchestrator | 2025-04-05 12:30:21.774374 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-05 12:30:21.774381 | orchestrator | Saturday 05 April 2025 12:28:04 +0000 (0:00:01.182) 0:04:04.333 ******** 2025-04-05 12:30:21.774388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-05 12:30:21.774403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-05 12:30:21.774431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-05 12:30:21.774440 | orchestrator | 2025-04-05 12:30:21.774447 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2025-04-05 12:30:21.774454 | orchestrator | Saturday 05 April 2025 12:28:09 +0000 (0:00:04.762) 0:04:09.096 ******** 2025-04-05 12:30:21.774461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774469 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774483 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774498 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774505 | orchestrator | 2025-04-05 12:30:21.774512 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-05 12:30:21.774519 | orchestrator | Saturday 05 April 2025 12:28:10 +0000 (0:00:01.319) 0:04:10.415 ******** 2025-04-05 12:30:21.774526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774540 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774568 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774578 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-05 12:30:21.774592 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774600 | orchestrator | 2025-04-05 12:30:21.774622 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-05 12:30:21.774630 | orchestrator | Saturday 05 April 2025 12:28:12 +0000 (0:00:01.907) 0:04:12.323 ******** 2025-04-05 12:30:21.774637 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.774644 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.774651 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.774658 | orchestrator | 2025-04-05 12:30:21.774665 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-05 12:30:21.774672 | orchestrator | Saturday 05 April 2025 12:28:14 +0000 (0:00:02.579) 0:04:14.902 ******** 2025-04-05 12:30:21.774679 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.774686 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.774693 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.774700 | orchestrator | 2025-04-05 12:30:21.774707 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-04-05 12:30:21.774714 | orchestrator | Saturday 05 April 2025 12:28:18 +0000 (0:00:03.168) 0:04:18.071 ******** 2025-04-05 12:30:21.774721 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-05 12:30:21.774728 | orchestrator | 2025-04-05 12:30:21.774735 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-05 12:30:21.774742 | orchestrator | Saturday 05 April 2025 12:28:19 +0000 (0:00:01.330) 0:04:19.401 ******** 2025-04-05 12:30:21.774749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774757 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}}}})  2025-04-05 12:30:21.774783 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774802 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774809 | orchestrator | 2025-04-05 12:30:21.774816 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-05 12:30:21.774823 | orchestrator | Saturday 05 April 2025 12:28:21 +0000 (0:00:01.687) 0:04:21.089 ******** 2025-04-05 12:30:21.774836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774844 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774858 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-05 12:30:21.774891 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774899 | orchestrator | 2025-04-05 12:30:21.774906 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-05 12:30:21.774913 | orchestrator | Saturday 05 April 2025 12:28:22 +0000 (0:00:01.671) 0:04:22.761 ******** 2025-04-05 12:30:21.774920 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.774927 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.774933 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.774940 | orchestrator | 2025-04-05 12:30:21.774947 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-05 12:30:21.774954 | orchestrator | Saturday 05 April 2025 12:28:24 +0000 (0:00:01.730) 0:04:24.492 ******** 2025-04-05 12:30:21.774961 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.774968 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.774975 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.774982 | orchestrator | 2025-04-05 12:30:21.774992 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-05 12:30:21.774999 | orchestrator | Saturday 05 April 2025 12:28:27 +0000 (0:00:02.632) 0:04:27.124 ******** 2025-04-05 12:30:21.775006 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.775013 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.775020 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.775028 | orchestrator | 2025-04-05 12:30:21.775035 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-05 12:30:21.775042 | orchestrator | Saturday 05 April 2025 12:28:30 +0000 (0:00:03.192) 0:04:30.317 ******** 2025-04-05 12:30:21.775049 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-05 12:30:21.775060 | orchestrator | 2025-04-05 12:30:21.775067 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-05 12:30:21.775074 | orchestrator | Saturday 05 April 2025 12:28:31 +0000 (0:00:01.198) 0:04:31.516 ******** 2025-04-05 12:30:21.775081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775089 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.775096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775103 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.775111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775118 | orchestrator | skipping: [testbed-node-2] 
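The changed and skipping items in the haproxy-config tasks above all follow one visible pattern: each service definition carries an 'enabled' flag (sometimes a boolean, sometimes a 'yes'/'no' string, as in nova-super-conductor's 'enabled': 'no') plus a nested 'haproxy' dict whose listeners carry their own 'enabled' flags, and only definitions where both levels evaluate true get a config file copied; everything else is reported as skipping. The short Python sketch below is purely illustrative and is not kolla-ansible's actual implementation: the helper names to_bool and haproxy_entries_to_render and the trimmed services data are assumptions made up for this example, with the dict shapes and values copied from the log records above.

# Illustrative sketch only -- not kolla-ansible code. It mimics the
# enable/skip decision visible in the haproxy-config tasks above.

def to_bool(value):
    """Normalize the mixed flag styles seen in the log ('no', 'yes', True, False)."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

def haproxy_entries_to_render(services):
    """Yield (service, listener_name, listener) tuples that would get a config
    copied; anything filtered out here corresponds to a 'skipping:' item above."""
    for name, svc in services.items():
        if not to_bool(svc.get("enabled", False)):
            continue  # e.g. nova-super-conductor ('enabled': 'no') is skipped
        for listener_name, listener in svc.get("haproxy", {}).items():
            if to_bool(listener.get("enabled", False)):
                yield name, listener_name, listener

# Trimmed example data shaped like the items printed in the log.
services = {
    "nova-novncproxy": {
        "enabled": True,
        "haproxy": {
            "nova_novncproxy": {"enabled": True, "mode": "http", "port": "6080",
                                "listen_port": "6080",
                                "backend_http_extra": ["timeout tunnel 1h"]},
        },
    },
    "nova-serialproxy": {
        "enabled": False,  # matches the skipped nova-cell:nova-serialproxy items
        "haproxy": {
            "nova_serialconsole_proxy": {"enabled": False, "port": "6083"},
        },
    },
}

for svc_name, listener_name, listener in haproxy_entries_to_render(services):
    print(f"would render {listener_name} for {svc_name} on port {listener['port']}")

Running the sketch prints a single line for nova_novncproxy on port 6080, mirroring the changed items for nova-novncproxy and the skips for the disabled serial and SPICE proxies. The "when using single external frontend" variants are skipped on every node throughout this section, which is consistent with the single-external-frontend option presumably being disabled in this testbed, so each external API keeps its own listener under api.testbed.osism.xyz.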
2025-04-05 12:30:21.775125 | orchestrator | 2025-04-05 12:30:21.775132 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-05 12:30:21.775139 | orchestrator | Saturday 05 April 2025 12:28:33 +0000 (0:00:01.750) 0:04:33.266 ******** 2025-04-05 12:30:21.775167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775176 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.775183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775191 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.775198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-05 12:30:21.775209 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.775216 | orchestrator | 2025-04-05 12:30:21.775223 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-05 12:30:21.775230 | orchestrator | Saturday 05 April 2025 12:28:34 +0000 (0:00:01.670) 0:04:34.937 ******** 2025-04-05 12:30:21.775237 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.775244 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.775251 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.775258 | orchestrator | 2025-04-05 12:30:21.775266 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-05 12:30:21.775273 | orchestrator | Saturday 05 April 2025 12:28:36 +0000 (0:00:01.916) 0:04:36.853 ******** 2025-04-05 12:30:21.775280 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.775287 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.775294 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.775301 | orchestrator | 2025-04-05 12:30:21.775308 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-05 12:30:21.775315 | orchestrator | Saturday 05 April 2025 12:28:39 +0000 (0:00:03.085) 0:04:39.938 ******** 2025-04-05 12:30:21.775322 | orchestrator | ok: 
[testbed-node-0] 2025-04-05 12:30:21.775330 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.775336 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.775343 | orchestrator | 2025-04-05 12:30:21.775351 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-05 12:30:21.775358 | orchestrator | Saturday 05 April 2025 12:28:43 +0000 (0:00:03.591) 0:04:43.529 ******** 2025-04-05 12:30:21.775365 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.775372 | orchestrator | 2025-04-05 12:30:21.775379 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-05 12:30:21.775386 | orchestrator | Saturday 05 April 2025 12:28:45 +0000 (0:00:01.531) 0:04:45.061 ******** 2025-04-05 12:30:21.775393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.775400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.775465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.775528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775568 | orchestrator | 2025-04-05 12:30:21.775590 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-05 12:30:21.775599 | orchestrator | Saturday 05 April 2025 12:28:48 +0000 (0:00:03.522) 0:04:48.584 ******** 2025-04-05 12:30:21.775606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.775613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775635 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775648 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.775671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.775684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775713 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.775726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.775737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-05 12:30:21.775798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-05 12:30:21.775817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:30:21.775824 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.775831 | orchestrator | 2025-04-05 12:30:21.775839 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-05 12:30:21.775846 | orchestrator | Saturday 05 April 2025 12:28:49 +0000 (0:00:00.955) 0:04:49.539 ******** 2025-04-05 12:30:21.775853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775868 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.775875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775890 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.775897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-05 12:30:21.775916 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.775923 | orchestrator | 2025-04-05 12:30:21.775930 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-05 12:30:21.775937 | orchestrator | Saturday 05 April 2025 12:28:50 +0000 (0:00:01.227) 0:04:50.766 ******** 2025-04-05 12:30:21.775944 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.775951 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.775958 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.775965 | orchestrator | 2025-04-05 12:30:21.775972 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-05 12:30:21.775980 | orchestrator | Saturday 05 April 2025 12:28:51 +0000 (0:00:01.070) 0:04:51.836 ******** 2025-04-05 12:30:21.775987 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.775994 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.776001 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.776008 | orchestrator | 2025-04-05 
12:30:21.776015 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-05 12:30:21.776022 | orchestrator | Saturday 05 April 2025 12:28:53 +0000 (0:00:01.973) 0:04:53.809 ******** 2025-04-05 12:30:21.776046 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.776054 | orchestrator | 2025-04-05 12:30:21.776061 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-05 12:30:21.776068 | orchestrator | Saturday 05 April 2025 12:28:54 +0000 (0:00:01.186) 0:04:54.996 ******** 2025-04-05 12:30:21.776082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:30:21.776090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:30:21.776098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:30:21.776119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:30:21.776148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:30:21.776158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:30:21.776165 | orchestrator | 2025-04-05 12:30:21.776171 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-05 12:30:21.776178 | orchestrator | Saturday 05 April 2025 12:29:00 +0000 (0:00:05.242) 0:05:00.239 ******** 2025-04-05 12:30:21.776184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:30:21.776195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:30:21.776207 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.776230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:30:21.776238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:30:21.776244 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.776251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:30:21.776261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:30:21.776268 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.776278 | orchestrator | 2025-04-05 12:30:21.776285 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-05 12:30:21.776291 | orchestrator | Saturday 05 April 2025 12:29:00 +0000 (0:00:00.694) 0:05:00.934 ******** 2025-04-05 12:30:21.776298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-05 12:30:21.776317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776331 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.776338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-05 12:30:21.776344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776357 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.776364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-05 12:30:21.776370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-05 12:30:21.776386 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.776392 | orchestrator | 2025-04-05 12:30:21.776399 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-04-05 12:30:21.776405 | orchestrator | Saturday 05 April 2025 12:29:01 +0000 (0:00:01.032) 0:05:01.966 ******** 2025-04-05 12:30:21.776411 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.776417 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.776423 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.776430 | orchestrator | 2025-04-05 12:30:21.776436 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-04-05 12:30:21.776442 | orchestrator | Saturday 05 April 2025 12:29:02 +0000 (0:00:00.682) 0:05:02.649 ******** 2025-04-05 12:30:21.776448 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.776454 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.776461 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.776467 | orchestrator | 2025-04-05 12:30:21.776473 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-04-05 12:30:21.776479 | orchestrator | Saturday 05 April 2025 12:29:03 +0000 (0:00:01.027) 0:05:03.677 ******** 2025-04-05 12:30:21.776485 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.776491 | orchestrator | 2025-04-05 12:30:21.776498 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-04-05 12:30:21.776504 | orchestrator | Saturday 05 April 2025 12:29:05 +0000 (0:00:01.635) 0:05:05.313 ******** 2025-04-05 12:30:21.776511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:30:21.776531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.776539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:30:21.776577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.776583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:30:21.776633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.776640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:30:21.776682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.776692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:30:21.776738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.776748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:30:21.776801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.776811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776837 | orchestrator | 2025-04-05 12:30:21.776844 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-04-05 12:30:21.776850 | orchestrator | Saturday 
05 April 2025 12:29:09 +0000 (0:00:04.635) 0:05:09.948 ******** 2025-04-05 12:30:21.776857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:30:21.776863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.776878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:30:21.776909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.776916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:30:21.776946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.776953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.776960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776973 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.776979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.776993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.777000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:30:21.777012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.777018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.777041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:30:21.777057 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:30:21.777070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.777094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:30:21.777107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:30:21.777114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:30:21.777138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 
'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:30:21.777144 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777151 | orchestrator | 2025-04-05 12:30:21.777157 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-05 12:30:21.777163 | orchestrator | Saturday 05 April 2025 12:29:11 +0000 (0:00:01.357) 0:05:11.305 ******** 2025-04-05 12:30:21.777170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777199 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777233 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}})  2025-04-05 12:30:21.777252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-05 12:30:21.777265 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777271 | orchestrator | 2025-04-05 12:30:21.777278 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-05 12:30:21.777284 | orchestrator | Saturday 05 April 2025 12:29:12 +0000 (0:00:01.442) 0:05:12.748 ******** 2025-04-05 12:30:21.777290 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777296 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777302 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777308 | orchestrator | 2025-04-05 12:30:21.777315 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-05 12:30:21.777324 | orchestrator | Saturday 05 April 2025 12:29:13 +0000 (0:00:00.669) 0:05:13.418 ******** 2025-04-05 12:30:21.777330 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777336 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777342 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777349 | orchestrator | 2025-04-05 12:30:21.777355 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-05 12:30:21.777361 | orchestrator | Saturday 05 April 2025 12:29:15 +0000 (0:00:01.811) 0:05:15.229 ******** 2025-04-05 12:30:21.777367 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.777373 | orchestrator | 2025-04-05 12:30:21.777380 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-05 12:30:21.777386 | orchestrator | Saturday 05 April 2025 12:29:16 +0000 (0:00:01.667) 0:05:16.897 ******** 2025-04-05 12:30:21.777392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:30:21.777405 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:30:21.777416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-05 12:30:21.777423 | orchestrator | 2025-04-05 12:30:21.777429 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-05 12:30:21.777439 | orchestrator | Saturday 05 April 2025 12:29:19 +0000 (0:00:02.298) 0:05:19.195 ******** 2025-04-05 12:30:21.777446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-05 12:30:21.777453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-05 12:30:21.777459 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777465 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-05 12:30:21.777486 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777492 | orchestrator | 2025-04-05 12:30:21.777499 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-05 12:30:21.777505 | orchestrator | Saturday 05 April 2025 12:29:19 +0000 (0:00:00.664) 0:05:19.860 ******** 2025-04-05 12:30:21.777511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-05 12:30:21.777518 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-05 12:30:21.777534 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-05 12:30:21.777546 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777553 | orchestrator | 2025-04-05 12:30:21.777559 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-05 12:30:21.777565 | orchestrator | Saturday 05 April 2025 12:29:20 +0000 (0:00:00.770) 0:05:20.630 ******** 2025-04-05 12:30:21.777571 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777577 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777583 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777590 | 
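The service dictionaries echoed above are what the haproxy-config role consumes: each entry under a service's 'haproxy' key (here rabbitmq_management with mode 'http', port '15672' and host_group 'rabbitmq') becomes a per-service HAProxy configuration fragment on the controllers, and entries flagged 'external': True (such as the skyline ones further down) additionally carry external_fqdn api.testbed.osism.xyz for the public-facing frontend. As a rough, non-authoritative illustration of that mapping (the real role renders kolla-ansible's own Jinja2 templates; the backend addresses and the <internal_vip> placeholder below are assumptions for this sketch, not values taken from the job):

    # Simplified stand-in for the haproxy-config templating step: it only shows
    # how the 'haproxy' sub-dict above could map onto an HAProxy listen section.
    rabbitmq_haproxy = {
        "rabbitmq_management": {
            "enabled": "yes",
            "mode": "http",
            "port": "15672",
            "host_group": "rabbitmq",
        }
    }

    # Assumed backend addresses (the API-network IPs 192.168.16.10-12 appear in
    # healthcheck URLs elsewhere in this log); <internal_vip> is a placeholder.
    backends = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def render_listen_section(name, cfg, hosts):
        """Render a minimal HAProxy 'listen' section for one enabled service entry."""
        lines = [
            f"listen {name}",
            f"    mode {cfg['mode']}",
            f"    bind <internal_vip>:{cfg.get('listen_port', cfg['port'])}",
        ]
        lines += [f"    server {host} {addr}:{cfg['port']} check" for host, addr in hosts.items()]
        return "\n".join(lines)

    for name, cfg in rabbitmq_haproxy.items():
        if cfg.get("enabled") == "yes":
            print(render_listen_section(name, cfg, backends))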
orchestrator | 2025-04-05 12:30:21.777596 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-05 12:30:21.777602 | orchestrator | Saturday 05 April 2025 12:29:21 +0000 (0:00:00.689) 0:05:21.319 ******** 2025-04-05 12:30:21.777608 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777614 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777621 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777627 | orchestrator | 2025-04-05 12:30:21.777633 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-05 12:30:21.777640 | orchestrator | Saturday 05 April 2025 12:29:22 +0000 (0:00:01.637) 0:05:22.957 ******** 2025-04-05 12:30:21.777646 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:30:21.777652 | orchestrator | 2025-04-05 12:30:21.777658 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-05 12:30:21.777664 | orchestrator | Saturday 05 April 2025 12:29:24 +0000 (0:00:01.725) 0:05:24.682 ******** 2025-04-05 12:30:21.777671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-05 12:30:21.777722 | orchestrator | 2025-04-05 12:30:21.777728 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-05 
12:30:21.777737 | orchestrator | Saturday 05 April 2025 12:29:31 +0000 (0:00:06.728) 0:05:31.410 ******** 2025-04-05 12:30:21.777743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777771 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777798 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-05 12:30:21.777824 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777830 | orchestrator | 2025-04-05 12:30:21.777837 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-05 12:30:21.777843 | orchestrator | Saturday 05 April 2025 12:29:32 +0000 (0:00:00.686) 0:05:32.097 ******** 2025-04-05 12:30:21.777849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-04-05 12:30:21.777869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777881 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.777888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777907 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.777913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-05 12:30:21.777952 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.777958 | orchestrator | 2025-04-05 12:30:21.777965 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-05 12:30:21.777971 | orchestrator | Saturday 05 April 2025 12:29:33 +0000 (0:00:01.131) 0:05:33.229 ******** 2025-04-05 12:30:21.777977 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.777983 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.777990 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.777996 | orchestrator | 2025-04-05 12:30:21.778002 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-04-05 12:30:21.778008 | orchestrator | Saturday 05 April 2025 12:29:34 +0000 (0:00:01.083) 0:05:34.313 ******** 2025-04-05 12:30:21.778034 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.778042 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.778048 | orchestrator | changed: 
[testbed-node-2] 2025-04-05 12:30:21.778055 | orchestrator | 2025-04-05 12:30:21.778061 | orchestrator | TASK [include_role : swift] **************************************************** 2025-04-05 12:30:21.778067 | orchestrator | Saturday 05 April 2025 12:29:36 +0000 (0:00:01.955) 0:05:36.269 ******** 2025-04-05 12:30:21.778073 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778080 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778086 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778092 | orchestrator | 2025-04-05 12:30:21.778098 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-04-05 12:30:21.778105 | orchestrator | Saturday 05 April 2025 12:29:36 +0000 (0:00:00.403) 0:05:36.672 ******** 2025-04-05 12:30:21.778111 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778117 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778127 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778133 | orchestrator | 2025-04-05 12:30:21.778139 | orchestrator | TASK [include_role : trove] **************************************************** 2025-04-05 12:30:21.778146 | orchestrator | Saturday 05 April 2025 12:29:37 +0000 (0:00:00.394) 0:05:37.067 ******** 2025-04-05 12:30:21.778152 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778158 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778164 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778171 | orchestrator | 2025-04-05 12:30:21.778177 | orchestrator | TASK [include_role : venus] **************************************************** 2025-04-05 12:30:21.778183 | orchestrator | Saturday 05 April 2025 12:29:37 +0000 (0:00:00.245) 0:05:37.312 ******** 2025-04-05 12:30:21.778189 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778196 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778202 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778208 | orchestrator | 2025-04-05 12:30:21.778215 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-04-05 12:30:21.778221 | orchestrator | Saturday 05 April 2025 12:29:37 +0000 (0:00:00.391) 0:05:37.704 ******** 2025-04-05 12:30:21.778227 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778233 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778239 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778246 | orchestrator | 2025-04-05 12:30:21.778252 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-04-05 12:30:21.778263 | orchestrator | Saturday 05 April 2025 12:29:38 +0000 (0:00:00.430) 0:05:38.134 ******** 2025-04-05 12:30:21.778269 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778276 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778282 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778288 | orchestrator | 2025-04-05 12:30:21.778294 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-04-05 12:30:21.778300 | orchestrator | Saturday 05 April 2025 12:29:38 +0000 (0:00:00.530) 0:05:38.665 ******** 2025-04-05 12:30:21.778307 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778313 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778319 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778325 | orchestrator | 2025-04-05 
12:30:21.778332 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-04-05 12:30:21.778338 | orchestrator | Saturday 05 April 2025 12:29:39 +0000 (0:00:00.728) 0:05:39.394 ******** 2025-04-05 12:30:21.778344 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778350 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778357 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778363 | orchestrator | 2025-04-05 12:30:21.778369 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-04-05 12:30:21.778375 | orchestrator | Saturday 05 April 2025 12:29:39 +0000 (0:00:00.399) 0:05:39.793 ******** 2025-04-05 12:30:21.778382 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778388 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778394 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778400 | orchestrator | 2025-04-05 12:30:21.778407 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-04-05 12:30:21.778415 | orchestrator | Saturday 05 April 2025 12:29:40 +0000 (0:00:00.908) 0:05:40.701 ******** 2025-04-05 12:30:21.778422 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778428 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778434 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778440 | orchestrator | 2025-04-05 12:30:21.778447 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-04-05 12:30:21.778453 | orchestrator | Saturday 05 April 2025 12:29:41 +0000 (0:00:01.034) 0:05:41.736 ******** 2025-04-05 12:30:21.778459 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778465 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778472 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778478 | orchestrator | 2025-04-05 12:30:21.778487 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-04-05 12:30:21.778493 | orchestrator | Saturday 05 April 2025 12:29:42 +0000 (0:00:01.229) 0:05:42.965 ******** 2025-04-05 12:30:21.778500 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.778506 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.778512 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.778519 | orchestrator | 2025-04-05 12:30:21.778525 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-04-05 12:30:21.778531 | orchestrator | Saturday 05 April 2025 12:29:51 +0000 (0:00:08.528) 0:05:51.494 ******** 2025-04-05 12:30:21.778537 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778544 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778550 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778556 | orchestrator | 2025-04-05 12:30:21.778562 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-04-05 12:30:21.778569 | orchestrator | Saturday 05 April 2025 12:29:52 +0000 (0:00:00.684) 0:05:52.178 ******** 2025-04-05 12:30:21.778575 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.778581 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.778588 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.778594 | orchestrator | 2025-04-05 12:30:21.778600 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-04-05 12:30:21.778606 | orchestrator | 
Saturday 05 April 2025 12:30:03 +0000 (0:00:11.132) 0:06:03.311 ******** 2025-04-05 12:30:21.778618 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778625 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778631 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778637 | orchestrator | 2025-04-05 12:30:21.778643 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-04-05 12:30:21.778650 | orchestrator | Saturday 05 April 2025 12:30:04 +0000 (0:00:00.992) 0:06:04.304 ******** 2025-04-05 12:30:21.778656 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:30:21.778662 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:30:21.778668 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:30:21.778674 | orchestrator | 2025-04-05 12:30:21.778681 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-04-05 12:30:21.778687 | orchestrator | Saturday 05 April 2025 12:30:13 +0000 (0:00:09.322) 0:06:13.627 ******** 2025-04-05 12:30:21.778694 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778700 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778706 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778712 | orchestrator | 2025-04-05 12:30:21.778718 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-04-05 12:30:21.778725 | orchestrator | Saturday 05 April 2025 12:30:14 +0000 (0:00:00.578) 0:06:14.206 ******** 2025-04-05 12:30:21.778731 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778737 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778743 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778749 | orchestrator | 2025-04-05 12:30:21.778756 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-04-05 12:30:21.778772 | orchestrator | Saturday 05 April 2025 12:30:14 +0000 (0:00:00.574) 0:06:14.780 ******** 2025-04-05 12:30:21.778778 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778784 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778794 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778800 | orchestrator | 2025-04-05 12:30:21.778806 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-05 12:30:21.778813 | orchestrator | Saturday 05 April 2025 12:30:15 +0000 (0:00:00.558) 0:06:15.339 ******** 2025-04-05 12:30:21.778819 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778825 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778831 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778838 | orchestrator | 2025-04-05 12:30:21.778844 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-05 12:30:21.778850 | orchestrator | Saturday 05 April 2025 12:30:15 +0000 (0:00:00.325) 0:06:15.665 ******** 2025-04-05 12:30:21.778856 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778863 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778869 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778875 | orchestrator | 2025-04-05 12:30:21.778881 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-05 12:30:21.778888 | orchestrator | Saturday 05 April 2025 12:30:16 +0000 (0:00:00.602) 0:06:16.268 ******** 
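The handler sequence above is the loadbalancer role's rolling restart: on the backup nodes keepalived, haproxy and proxysql are stopped, then haproxy, proxysql and keepalived are started again one after another with a wait between each step; the corresponding master-container handlers are skipped in this run, and the final "Wait for ... to listen on VIP" handlers block until the services actually accept connections on the virtual IP. Conceptually that last check amounts to the small loop below, a sketch only: the real handlers presumably rely on Ansible's wait_for module or an equivalent check, and the VIP address and port shown are placeholders, not values taken from this job.

    # Sketch of a "wait until the service listens on the VIP" check.
    import socket
    import time

    def wait_for_listen(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
        """Return True once a TCP connection to host:port succeeds, False on timeout."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=interval):
                    return True
            except OSError:
                time.sleep(interval)
        return False

    # Example: wait for HAProxy's keystone frontend on a hypothetical internal VIP.
    wait_for_listen("192.168.16.254", 5000)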
2025-04-05 12:30:21.778894 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:30:21.778900 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:30:21.778906 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:30:21.778913 | orchestrator | 2025-04-05 12:30:21.778919 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-05 12:30:21.778928 | orchestrator | Saturday 05 April 2025 12:30:16 +0000 (0:00:00.544) 0:06:16.813 ******** 2025-04-05 12:30:21.778938 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778948 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778956 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.778963 | orchestrator | 2025-04-05 12:30:21.778969 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-05 12:30:21.778975 | orchestrator | Saturday 05 April 2025 12:30:17 +0000 (0:00:01.172) 0:06:17.985 ******** 2025-04-05 12:30:21.778982 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:30:21.778992 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:30:21.778998 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:30:21.779005 | orchestrator | 2025-04-05 12:30:21.779011 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:30:21.779017 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2025-04-05 12:30:21.779024 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2025-04-05 12:30:21.779030 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0 2025-04-05 12:30:21.779037 | orchestrator | 2025-04-05 12:30:21.779043 | orchestrator | 2025-04-05 12:30:21.779052 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:30:24.806206 | orchestrator | Saturday 05 April 2025 12:30:19 +0000 (0:00:01.135) 0:06:19.121 ******** 2025-04-05 12:30:24.806343 | orchestrator | =============================================================================== 2025-04-05 12:30:24.806361 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.13s 2025-04-05 12:30:24.806376 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.32s 2025-04-05 12:30:24.806391 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.53s 2025-04-05 12:30:24.806405 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.73s 2025-04-05 12:30:24.806419 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.13s 2025-04-05 12:30:24.806433 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.24s 2025-04-05 12:30:24.806447 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.85s 2025-04-05 12:30:24.806460 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.76s 2025-04-05 12:30:24.806474 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.75s 2025-04-05 12:30:24.806488 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.64s 2025-04-05 12:30:24.806502 | orchestrator | haproxy-config : Copying over cinder haproxy config 
--------------------- 4.56s 2025-04-05 12:30:24.806516 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.26s 2025-04-05 12:30:24.806530 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.10s 2025-04-05 12:30:24.806543 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.04s 2025-04-05 12:30:24.806557 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.87s 2025-04-05 12:30:24.806571 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.70s 2025-04-05 12:30:24.806585 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.69s 2025-04-05 12:30:24.806599 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.60s 2025-04-05 12:30:24.806613 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.59s 2025-04-05 12:30:24.806626 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.59s 2025-04-05 12:30:24.806641 | orchestrator | 2025-04-05 12:30:21 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:24.806656 | orchestrator | 2025-04-05 12:30:21 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:24.806670 | orchestrator | 2025-04-05 12:30:21 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:24.806684 | orchestrator | 2025-04-05 12:30:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:24.806717 | orchestrator | 2025-04-05 12:30:24 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:27.833396 | orchestrator | 2025-04-05 12:30:24 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:27.834311 | orchestrator | 2025-04-05 12:30:24 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:27.834363 | orchestrator | 2025-04-05 12:30:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:27.834402 | orchestrator | 2025-04-05 12:30:27 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:27.837339 | orchestrator | 2025-04-05 12:30:27 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:27.837392 | orchestrator | 2025-04-05 12:30:27 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:30.859390 | orchestrator | 2025-04-05 12:30:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:30.859509 | orchestrator | 2025-04-05 12:30:30 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:30.864899 | orchestrator | 2025-04-05 12:30:30 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:30.865521 | orchestrator | 2025-04-05 12:30:30 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:33.897366 | orchestrator | 2025-04-05 12:30:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:33.897451 | orchestrator | 2025-04-05 12:30:33 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:33.897693 | orchestrator | 2025-04-05 12:30:33 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:33.898683 | orchestrator | 2025-04-05 
12:30:33 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:33.898697 | orchestrator | 2025-04-05 12:30:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:36.923757 | orchestrator | 2025-04-05 12:30:36 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:36.925007 | orchestrator | 2025-04-05 12:30:36 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:36.925718 | orchestrator | 2025-04-05 12:30:36 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:39.961117 | orchestrator | 2025-04-05 12:30:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:39.961248 | orchestrator | 2025-04-05 12:30:39 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:39.962069 | orchestrator | 2025-04-05 12:30:39 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:39.962432 | orchestrator | 2025-04-05 12:30:39 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:42.986468 | orchestrator | 2025-04-05 12:30:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:42.986692 | orchestrator | 2025-04-05 12:30:42 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:42.987178 | orchestrator | 2025-04-05 12:30:42 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:42.987214 | orchestrator | 2025-04-05 12:30:42 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:46.011853 | orchestrator | 2025-04-05 12:30:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:46.012064 | orchestrator | 2025-04-05 12:30:46 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:46.012561 | orchestrator | 2025-04-05 12:30:46 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:46.012617 | orchestrator | 2025-04-05 12:30:46 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:46.012638 | orchestrator | 2025-04-05 12:30:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:49.047034 | orchestrator | 2025-04-05 12:30:49 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:49.047714 | orchestrator | 2025-04-05 12:30:49 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:49.048730 | orchestrator | 2025-04-05 12:30:49 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:52.085620 | orchestrator | 2025-04-05 12:30:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:52.085750 | orchestrator | 2025-04-05 12:30:52 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:52.087366 | orchestrator | 2025-04-05 12:30:52 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:52.089613 | orchestrator | 2025-04-05 12:30:52 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:55.124639 | orchestrator | 2025-04-05 12:30:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:55.124820 | orchestrator | 2025-04-05 12:30:55 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:55.125382 | orchestrator | 2025-04-05 12:30:55 | INFO  | Task 
8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:55.126843 | orchestrator | 2025-04-05 12:30:55 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:30:58.168315 | orchestrator | 2025-04-05 12:30:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:30:58.168461 | orchestrator | 2025-04-05 12:30:58 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:30:58.170002 | orchestrator | 2025-04-05 12:30:58 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:30:58.171631 | orchestrator | 2025-04-05 12:30:58 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:01.228148 | orchestrator | 2025-04-05 12:30:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:01.228280 | orchestrator | 2025-04-05 12:31:01 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:01.230216 | orchestrator | 2025-04-05 12:31:01 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:04.286496 | orchestrator | 2025-04-05 12:31:01 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:04.286607 | orchestrator | 2025-04-05 12:31:01 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:04.286641 | orchestrator | 2025-04-05 12:31:04 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:04.287686 | orchestrator | 2025-04-05 12:31:04 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:04.287836 | orchestrator | 2025-04-05 12:31:04 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:07.330093 | orchestrator | 2025-04-05 12:31:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:07.330227 | orchestrator | 2025-04-05 12:31:07 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:07.335064 | orchestrator | 2025-04-05 12:31:07 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:07.335916 | orchestrator | 2025-04-05 12:31:07 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:10.376322 | orchestrator | 2025-04-05 12:31:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:10.376440 | orchestrator | 2025-04-05 12:31:10 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:10.379156 | orchestrator | 2025-04-05 12:31:10 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:10.379192 | orchestrator | 2025-04-05 12:31:10 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:13.418702 | orchestrator | 2025-04-05 12:31:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:13.418856 | orchestrator | 2025-04-05 12:31:13 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:13.420621 | orchestrator | 2025-04-05 12:31:13 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:13.426325 | orchestrator | 2025-04-05 12:31:13 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:13.426611 | orchestrator | 2025-04-05 12:31:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:16.476486 | orchestrator | 2025-04-05 12:31:16 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state 
STARTED 2025-04-05 12:31:16.478266 | orchestrator | 2025-04-05 12:31:16 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:16.480019 | orchestrator | 2025-04-05 12:31:16 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:16.480181 | orchestrator | 2025-04-05 12:31:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:19.525483 | orchestrator | 2025-04-05 12:31:19 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:19.526873 | orchestrator | 2025-04-05 12:31:19 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:19.528111 | orchestrator | 2025-04-05 12:31:19 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:19.528218 | orchestrator | 2025-04-05 12:31:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:22.577383 | orchestrator | 2025-04-05 12:31:22 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:22.578634 | orchestrator | 2025-04-05 12:31:22 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:22.580720 | orchestrator | 2025-04-05 12:31:22 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:25.630560 | orchestrator | 2025-04-05 12:31:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:25.630680 | orchestrator | 2025-04-05 12:31:25 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:25.632563 | orchestrator | 2025-04-05 12:31:25 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:25.634657 | orchestrator | 2025-04-05 12:31:25 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:25.634810 | orchestrator | 2025-04-05 12:31:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:28.674743 | orchestrator | 2025-04-05 12:31:28 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:28.677979 | orchestrator | 2025-04-05 12:31:28 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:31.720997 | orchestrator | 2025-04-05 12:31:28 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:31.721140 | orchestrator | 2025-04-05 12:31:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:31.721174 | orchestrator | 2025-04-05 12:31:31 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:31.722709 | orchestrator | 2025-04-05 12:31:31 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:31.725554 | orchestrator | 2025-04-05 12:31:31 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:34.768552 | orchestrator | 2025-04-05 12:31:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:34.768700 | orchestrator | 2025-04-05 12:31:34 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:34.771927 | orchestrator | 2025-04-05 12:31:34 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:34.775119 | orchestrator | 2025-04-05 12:31:34 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:37.835140 | orchestrator | 2025-04-05 12:31:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:37.835275 | orchestrator 
| 2025-04-05 12:31:37 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:37.835976 | orchestrator | 2025-04-05 12:31:37 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:37.837157 | orchestrator | 2025-04-05 12:31:37 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:37.837417 | orchestrator | 2025-04-05 12:31:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:40.876699 | orchestrator | 2025-04-05 12:31:40 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:40.878146 | orchestrator | 2025-04-05 12:31:40 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:40.880914 | orchestrator | 2025-04-05 12:31:40 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:43.929395 | orchestrator | 2025-04-05 12:31:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:43.929516 | orchestrator | 2025-04-05 12:31:43 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:43.931367 | orchestrator | 2025-04-05 12:31:43 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:43.933501 | orchestrator | 2025-04-05 12:31:43 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:46.985272 | orchestrator | 2025-04-05 12:31:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:46.985406 | orchestrator | 2025-04-05 12:31:46 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:46.986649 | orchestrator | 2025-04-05 12:31:46 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:46.988149 | orchestrator | 2025-04-05 12:31:46 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:50.041913 | orchestrator | 2025-04-05 12:31:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:50.042089 | orchestrator | 2025-04-05 12:31:50 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:50.043613 | orchestrator | 2025-04-05 12:31:50 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:50.045831 | orchestrator | 2025-04-05 12:31:50 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:50.046499 | orchestrator | 2025-04-05 12:31:50 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:53.097354 | orchestrator | 2025-04-05 12:31:53 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:53.098294 | orchestrator | 2025-04-05 12:31:53 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:53.098336 | orchestrator | 2025-04-05 12:31:53 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:56.149721 | orchestrator | 2025-04-05 12:31:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:56.149875 | orchestrator | 2025-04-05 12:31:56 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:56.151311 | orchestrator | 2025-04-05 12:31:56 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:56.153197 | orchestrator | 2025-04-05 12:31:56 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:31:59.204208 | orchestrator | 2025-04-05 12:31:56 | 
INFO  | Wait 1 second(s) until the next check 2025-04-05 12:31:59.204322 | orchestrator | 2025-04-05 12:31:59 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:31:59.207255 | orchestrator | 2025-04-05 12:31:59 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:31:59.209611 | orchestrator | 2025-04-05 12:31:59 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:02.261942 | orchestrator | 2025-04-05 12:31:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:02.262121 | orchestrator | 2025-04-05 12:32:02 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:02.263905 | orchestrator | 2025-04-05 12:32:02 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:02.266151 | orchestrator | 2025-04-05 12:32:02 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:05.317005 | orchestrator | 2025-04-05 12:32:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:05.317141 | orchestrator | 2025-04-05 12:32:05 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:05.318749 | orchestrator | 2025-04-05 12:32:05 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:05.322900 | orchestrator | 2025-04-05 12:32:05 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:05.323239 | orchestrator | 2025-04-05 12:32:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:08.386913 | orchestrator | 2025-04-05 12:32:08 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:11.435907 | orchestrator | 2025-04-05 12:32:08 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:11.436001 | orchestrator | 2025-04-05 12:32:08 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:11.436013 | orchestrator | 2025-04-05 12:32:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:11.436035 | orchestrator | 2025-04-05 12:32:11 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:11.439257 | orchestrator | 2025-04-05 12:32:11 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:11.440429 | orchestrator | 2025-04-05 12:32:11 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:11.440553 | orchestrator | 2025-04-05 12:32:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:14.499586 | orchestrator | 2025-04-05 12:32:14 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:14.502220 | orchestrator | 2025-04-05 12:32:14 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:14.505185 | orchestrator | 2025-04-05 12:32:14 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:14.505304 | orchestrator | 2025-04-05 12:32:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:17.559115 | orchestrator | 2025-04-05 12:32:17 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:17.560083 | orchestrator | 2025-04-05 12:32:17 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:17.562383 | orchestrator | 2025-04-05 12:32:17 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 
is in state STARTED 2025-04-05 12:32:17.563181 | orchestrator | 2025-04-05 12:32:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:20.610546 | orchestrator | 2025-04-05 12:32:20 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:20.611150 | orchestrator | 2025-04-05 12:32:20 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:20.612387 | orchestrator | 2025-04-05 12:32:20 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:20.612629 | orchestrator | 2025-04-05 12:32:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:23.668134 | orchestrator | 2025-04-05 12:32:23 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:23.669409 | orchestrator | 2025-04-05 12:32:23 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:23.671377 | orchestrator | 2025-04-05 12:32:23 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:23.671720 | orchestrator | 2025-04-05 12:32:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:26.719381 | orchestrator | 2025-04-05 12:32:26 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:26.722269 | orchestrator | 2025-04-05 12:32:26 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:26.723425 | orchestrator | 2025-04-05 12:32:26 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:29.773456 | orchestrator | 2025-04-05 12:32:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:29.773601 | orchestrator | 2025-04-05 12:32:29 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state STARTED 2025-04-05 12:32:29.775011 | orchestrator | 2025-04-05 12:32:29 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:32:29.777045 | orchestrator | 2025-04-05 12:32:29 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:32:32.829164 | orchestrator | 2025-04-05 12:32:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:32:32.829295 | orchestrator | 2025-04-05 12:32:32 | INFO  | Task 8434ed85-df8b-4f80-9698-b47e91c24d33 is in state SUCCESS 2025-04-05 12:32:32.831597 | orchestrator | 2025-04-05 12:32:32.831638 | orchestrator | 2025-04-05 12:32:32.831652 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:32:32.831665 | orchestrator | 2025-04-05 12:32:32.831678 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:32:32.831691 | orchestrator | Saturday 05 April 2025 12:30:22 +0000 (0:00:00.202) 0:00:00.202 ******** 2025-04-05 12:32:32.831704 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:32:32.831718 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:32:32.831783 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:32:32.831797 | orchestrator | 2025-04-05 12:32:32.831810 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:32:32.831823 | orchestrator | Saturday 05 April 2025 12:30:23 +0000 (0:00:00.279) 0:00:00.481 ******** 2025-04-05 12:32:32.831836 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-04-05 12:32:32.831849 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-04-05 12:32:32.831862 | 
orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-04-05 12:32:32.831874 | orchestrator | 2025-04-05 12:32:32.831886 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-04-05 12:32:32.831899 | orchestrator | 2025-04-05 12:32:32.831911 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-05 12:32:32.831923 | orchestrator | Saturday 05 April 2025 12:30:23 +0000 (0:00:00.299) 0:00:00.780 ******** 2025-04-05 12:32:32.831936 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:32:32.831948 | orchestrator | 2025-04-05 12:32:32.831961 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-04-05 12:32:32.831973 | orchestrator | Saturday 05 April 2025 12:30:23 +0000 (0:00:00.437) 0:00:01.218 ******** 2025-04-05 12:32:32.831986 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:32:32.832012 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:32:32.832025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-05 12:32:32.832037 | orchestrator | 2025-04-05 12:32:32.832049 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-05 12:32:32.832062 | orchestrator | Saturday 05 April 2025 12:30:24 +0000 (0:00:00.838) 0:00:02.057 ******** 2025-04-05 12:32:32.832079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832188 | orchestrator | 2025-04-05 12:32:32.832202 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-05 12:32:32.832216 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:01.378) 0:00:03.435 ******** 2025-04-05 12:32:32.832230 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:32:32.832250 | orchestrator | 2025-04-05 12:32:32.832265 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-04-05 12:32:32.832278 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.534) 0:00:03.970 ******** 2025-04-05 12:32:32.832302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832408 | orchestrator | 2025-04-05 12:32:32.832422 | orchestrator | TASK [service-cert-copy : opensearch | Copying 
over backend internal TLS certificate] *** 2025-04-05 12:32:32.832437 | orchestrator | Saturday 05 April 2025 12:30:29 +0000 (0:00:02.883) 0:00:06.854 ******** 2025-04-05 12:32:32.832452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832488 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:32:32.832504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832542 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:32:32.832555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832589 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:32:32.832601 | orchestrator | 2025-04-05 12:32:32.832614 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-05 12:32:32.832626 | orchestrator | Saturday 05 April 2025 12:30:30 +0000 (0:00:00.714) 0:00:07.569 ******** 2025-04-05 12:32:32.832639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832674 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:32:32.832687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-05 12:32:32.832721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832734 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:32:32.832771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-05 12:32:32.832785 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:32:32.832798 | orchestrator | 2025-04-05 12:32:32.832811 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-05 12:32:32.832823 | orchestrator | Saturday 05 April 2025 12:30:31 +0000 (0:00:00.867) 0:00:08.437 ******** 2025-04-05 12:32:32.832836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.832890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832917 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.832936 | orchestrator | 2025-04-05 12:32:32.832949 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-04-05 12:32:32.832961 | orchestrator | Saturday 05 April 2025 12:30:33 +0000 (0:00:02.269) 0:00:10.706 ******** 2025-04-05 12:32:32.832974 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:32:32.832987 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:32:32.832999 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:32:32.833012 | orchestrator | 2025-04-05 12:32:32.833024 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-05 12:32:32.833037 | orchestrator | Saturday 05 April 2025 12:30:36 +0000 (0:00:02.741) 0:00:13.447 ******** 2025-04-05 12:32:32.833049 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:32:32.833061 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:32:32.833074 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:32:32.833087 | orchestrator | 2025-04-05 12:32:32.833099 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-05 12:32:32.833112 | orchestrator | Saturday 05 April 2025 12:30:37 +0000 (0:00:01.792) 0:00:15.239 ******** 2025-04-05 12:32:32.833125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.833145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.833159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-05 12:32:32.833173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.833192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.833211 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-05 12:32:32.833225 | orchestrator | 2025-04-05 12:32:32.833238 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-05 12:32:32.833250 | orchestrator | Saturday 05 April 2025 12:30:40 +0000 (0:00:02.731) 0:00:17.971 ******** 2025-04-05 12:32:32.833262 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:32:32.833275 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:32:32.833287 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:32:32.833299 | orchestrator | 2025-04-05 12:32:32.833312 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-05 12:32:32.833324 | orchestrator | Saturday 05 April 2025 12:30:40 +0000 (0:00:00.285) 0:00:18.257 ******** 2025-04-05 12:32:32.833336 | orchestrator | 2025-04-05 12:32:32.833349 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-05 12:32:32.833361 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.121) 0:00:18.378 ******** 2025-04-05 12:32:32.833379 | orchestrator | 2025-04-05 12:32:32.833391 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-05 12:32:32.833404 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.039) 0:00:18.418 ******** 2025-04-05 12:32:32.833416 | orchestrator | 2025-04-05 12:32:32.833428 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-05 12:32:32.833440 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.041) 0:00:18.459 ******** 2025-04-05 12:32:32.833453 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:32:32.833465 | orchestrator | 2025-04-05 12:32:32.833477 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-05 12:32:32.833490 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.146) 0:00:18.606 ******** 2025-04-05 12:32:32.833502 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:32:32.833514 | orchestrator | 2025-04-05 12:32:32.833532 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-05 12:32:32.833545 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.319) 0:00:18.925 ******** 2025-04-05 12:32:32.833558 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:32:32.833570 | orchestrator | changed: [testbed-node-2] 2025-04-05 
12:32:32.833583 | orchestrator | changed: [testbed-node-1]
2025-04-05 12:32:32.833595 | orchestrator |
2025-04-05 12:32:32.833608 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-04-05 12:32:32.833730 | orchestrator | Saturday 05 April 2025 12:31:25 +0000 (0:00:44.417) 0:01:03.342 ********
2025-04-05 12:32:32.833746 | orchestrator | changed: [testbed-node-0]
2025-04-05 12:32:32.833776 | orchestrator | changed: [testbed-node-2]
2025-04-05 12:32:32.833789 | orchestrator | changed: [testbed-node-1]
2025-04-05 12:32:32.833801 | orchestrator |
2025-04-05 12:32:32.833814 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-04-05 12:32:32.833826 | orchestrator | Saturday 05 April 2025 12:32:20 +0000 (0:00:54.500) 0:01:57.843 ********
2025-04-05 12:32:32.833839 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-05 12:32:32.833851 | orchestrator |
2025-04-05 12:32:32.833864 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-04-05 12:32:32.833876 | orchestrator | Saturday 05 April 2025 12:32:20 +0000 (0:00:00.444) 0:01:58.287 ********
2025-04-05 12:32:32.833888 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:32:32.833901 | orchestrator |
2025-04-05 12:32:32.833913 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-04-05 12:32:32.833926 | orchestrator | Saturday 05 April 2025 12:32:23 +0000 (0:00:02.314) 0:02:00.601 ********
2025-04-05 12:32:32.833938 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:32:32.833951 | orchestrator |
2025-04-05 12:32:32.833963 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-04-05 12:32:32.833976 | orchestrator | Saturday 05 April 2025 12:32:25 +0000 (0:00:02.010) 0:02:02.612 ********
2025-04-05 12:32:32.833988 | orchestrator | changed: [testbed-node-0]
2025-04-05 12:32:32.834001 | orchestrator |
2025-04-05 12:32:32.834013 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-04-05 12:32:32.834071 | orchestrator | Saturday 05 April 2025 12:32:27 +0000 (0:00:02.371) 0:02:04.984 ********
2025-04-05 12:32:32.834084 | orchestrator | changed: [testbed-node-0]
2025-04-05 12:32:32.834097 | orchestrator |
2025-04-05 12:32:32.834109 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:32:32.834122 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-05 12:32:32.834136 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-05 12:32:32.834149 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-05 12:32:32.834171 | orchestrator |
2025-04-05 12:32:32.834183 | orchestrator |
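The three retention tasks above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") talk to the OpenSearch Index State Management (ISM) plugin on the cluster that was just restarted. The request bodies are not shown in this log, so the following Ansible sketch only illustrates the kind of calls involved; the policy name, retention period and index pattern are assumptions, not values taken from this run.

# Illustrative sketch only - not the task code used by the role.
- name: Create a log retention (ISM) policy
  ansible.builtin.uri:
    # Host/port taken from the opensearch healthcheck address above;
    # the policy name "retention" is an assumption.
    url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"
    method: PUT
    body_format: json
    status_code: [200, 201]
    body:
      policy:
        description: "Delete indices once they exceed an assumed age"
        default_state: hot
        states:
          - name: hot
            actions: []
            transitions:
              - state_name: delete
                conditions:
                  min_index_age: "14d"   # retention window assumed
          - name: delete
            actions:
              - delete: {}
            transitions: []

- name: Apply the policy to existing indices
  ansible.builtin.uri:
    # The index pattern is an assumption.
    url: "http://192.168.16.10:9200/_plugins/_ism/add/flog-*"
    method: POST
    body_format: json
    body:
      policy_id: retention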
2025-04-05 12:32:32.834196 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:32:32.834215 | orchestrator | Saturday 05 April 2025 12:32:30 +0000 (0:00:02.492) 0:02:07.476 ********
2025-04-05 12:32:35.872734 | orchestrator | ===============================================================================
2025-04-05 12:32:35.872898 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 54.50s
2025-04-05 12:32:35.872916 | orchestrator | opensearch : Restart opensearch container ------------------------------ 44.42s
2025-04-05 12:32:35.872930 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.88s
2025-04-05 12:32:35.872945 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.74s
2025-04-05 12:32:35.872959 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.73s
2025-04-05 12:32:35.872973 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.49s
2025-04-05 12:32:35.872988 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.37s
2025-04-05 12:32:35.873002 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.31s
2025-04-05 12:32:35.873016 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.27s
2025-04-05 12:32:35.873030 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.01s
2025-04-05 12:32:35.873044 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.79s
2025-04-05 12:32:35.873058 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.38s
2025-04-05 12:32:35.873073 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.87s
2025-04-05 12:32:35.873104 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.84s
2025-04-05 12:32:35.873119 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.71s
2025-04-05 12:32:35.873133 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-04-05 12:32:35.873147 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s
2025-04-05 12:32:35.873161 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s
2025-04-05 12:32:35.873175 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.32s
2025-04-05 12:32:35.873189 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2025-04-05 12:32:35.873202 | orchestrator | 2025-04-05 12:32:32 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:32:35.873217 | orchestrator | 2025-04-05 12:32:32 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED
2025-04-05 12:32:35.873231 | orchestrator | 2025-04-05 12:32:32 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:32:35.873261 | orchestrator | 2025-04-05 12:32:35 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:32:35.874701 | orchestrator | 2025-04-05 12:32:35 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED
2025-04-05 12:32:38.922571 | orchestrator | 2025-04-05 12:32:35 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:32:38.922684 | orchestrator | 2025-04-05 12:32:38 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:32:38.925183 | orchestrator | 2025-04-05 12:32:38 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED
2025-04-05 12:32:38.925299 | orchestrator | 2025-04-05 12:32:38 | INFO  | Wait 1 second(s) until the next check
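For reference, the opensearch and opensearch-dashboards container definitions printed by the tasks above declare their health checks as bare numbers of seconds ('interval': '30', 'retries': '3', 'start_period': '5', 'timeout': '30') plus a healthcheck_curl test command. Expressed in Docker Compose notation this corresponds roughly to the sketch below (the address is taken from the testbed-node-0 items; the mapping is an approximation, not the configuration kolla actually generates):

healthcheck:
  test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
  interval: 30s
  timeout: 30s
  retries: 3
  start_period: 5s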
2025-04-05 12:32:41.971240 | orchestrator | 2025-04-05 12:32:41 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED
2025-04-05 12:32:41.972940 | orchestrator | 2025-04-05 12:32:41 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED
2025-04-05 12:33:09.406825 | orchestrator | 2025-04-05 12:33:06 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:33:09.406949 | orchestrator | 2025-04-05 12:33:09 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state
STARTED 2025-04-05 12:33:09.407606 | orchestrator | 2025-04-05 12:33:09 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state STARTED 2025-04-05 12:33:12.450818 | orchestrator | 2025-04-05 12:33:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:12.450947 | orchestrator | 2025-04-05 12:33:12 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:33:12.453705 | orchestrator | 2025-04-05 12:33:12 | INFO  | Task 22b33a80-813f-433a-b15d-0cd71225fb55 is in state SUCCESS 2025-04-05 12:33:12.454908 | orchestrator | 2025-04-05 12:33:12.454947 | orchestrator | 2025-04-05 12:33:12.454960 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-04-05 12:33:12.454973 | orchestrator | 2025-04-05 12:33:12.454986 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-05 12:33:12.454999 | orchestrator | Saturday 05 April 2025 12:30:22 +0000 (0:00:00.072) 0:00:00.072 ******** 2025-04-05 12:33:12.455011 | orchestrator | ok: [localhost] => { 2025-04-05 12:33:12.455026 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-04-05 12:33:12.455039 | orchestrator | } 2025-04-05 12:33:12.455051 | orchestrator | 2025-04-05 12:33:12.455064 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-04-05 12:33:12.455076 | orchestrator | Saturday 05 April 2025 12:30:22 +0000 (0:00:00.034) 0:00:00.107 ******** 2025-04-05 12:33:12.455089 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-05 12:33:12.455103 | orchestrator | ...ignoring 2025-04-05 12:33:12.455116 | orchestrator | 2025-04-05 12:33:12.455128 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-05 12:33:12.455141 | orchestrator | Saturday 05 April 2025 12:30:25 +0000 (0:00:02.428) 0:00:02.535 ******** 2025-04-05 12:33:12.455153 | orchestrator | skipping: [localhost] 2025-04-05 12:33:12.455166 | orchestrator | 2025-04-05 12:33:12.455178 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-05 12:33:12.455190 | orchestrator | Saturday 05 April 2025 12:30:25 +0000 (0:00:00.047) 0:00:02.583 ******** 2025-04-05 12:33:12.455203 | orchestrator | ok: [localhost] 2025-04-05 12:33:12.455215 | orchestrator | 2025-04-05 12:33:12.455228 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:33:12.455240 | orchestrator | 2025-04-05 12:33:12.455252 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:33:12.455265 | orchestrator | Saturday 05 April 2025 12:30:25 +0000 (0:00:00.133) 0:00:02.716 ******** 2025-04-05 12:33:12.455277 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.455289 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.455302 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.455325 | orchestrator | 2025-04-05 12:33:12.455338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:33:12.455350 | orchestrator | Saturday 05 April 2025 12:30:25 +0000 (0:00:00.303) 0:00:03.019 ******** 2025-04-05 12:33:12.455362 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 
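The failed-but-ignored "Check MariaDB service" task earlier in this play is only a reachability probe: the message "Timeout when waiting for search string MariaDB in 192.168.16.9:3306" is what Ansible's wait_for module reports when the MariaDB banner cannot be read from the internal VIP, which is expected on a first deployment and is why the subsequent "Set kolla_action_mariadb = upgrade ..." task is skipped. A minimal sketch of such a probe (the timeout value is an assumption based on the reported elapsed time of 2 seconds):

# Sketch of a wait_for-style probe; not necessarily the playbook's exact task.
- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: 192.168.16.9        # internal VIP from the log message
    port: 3306
    search_regex: MariaDB     # the MariaDB handshake banner contains this string
    timeout: 2                # assumed; the task reported "elapsed": 2
  register: result_check_mariadb_service
  ignore_errors: true         # matches the "...ignoring" in the log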
2025-04-05 12:33:12.455375 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-05 12:33:12.455388 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-05 12:33:12.455400 | orchestrator | 2025-04-05 12:33:12.455412 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-05 12:33:12.455424 | orchestrator | 2025-04-05 12:33:12.455437 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-05 12:33:12.455449 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.427) 0:00:03.447 ******** 2025-04-05 12:33:12.455461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-05 12:33:12.455474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-05 12:33:12.455487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-05 12:33:12.455501 | orchestrator | 2025-04-05 12:33:12.455514 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-05 12:33:12.455528 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.285) 0:00:03.733 ******** 2025-04-05 12:33:12.455541 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:12.455570 | orchestrator | 2025-04-05 12:33:12.455584 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-05 12:33:12.455598 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.528) 0:00:04.262 ******** 2025-04-05 12:33:12.455629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.455648 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.455665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.455688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.455711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.455727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.455741 | orchestrator | 2025-04-05 12:33:12.455788 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-05 12:33:12.455803 | orchestrator | Saturday 05 April 2025 12:30:29 +0000 (0:00:02.859) 0:00:07.121 ******** 2025-04-05 12:33:12.455817 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.455832 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.455846 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.455865 | orchestrator | 2025-04-05 12:33:12.455878 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-05 12:33:12.455891 | orchestrator | Saturday 05 April 2025 12:30:30 +0000 (0:00:00.735) 0:00:07.856 ******** 2025-04-05 12:33:12.455904 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.455916 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.455928 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.455940 | orchestrator | 2025-04-05 12:33:12.455958 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-05 12:33:12.455971 | orchestrator | Saturday 05 April 2025 12:30:31 +0000 (0:00:01.428) 0:00:09.285 ******** 2025-04-05 12:33:12.455991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456088 | orchestrator | 2025-04-05 12:33:12.456101 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-05 12:33:12.456148 | orchestrator | Saturday 05 April 2025 12:30:35 +0000 (0:00:03.709) 0:00:12.994 ******** 2025-04-05 12:33:12.456162 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.456174 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.456187 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.456199 | orchestrator | 2025-04-05 12:33:12.456211 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-04-05 12:33:12.456224 | orchestrator | Saturday 05 April 2025 12:30:36 +0000 (0:00:01.079) 0:00:14.073 ******** 2025-04-05 12:33:12.456236 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.456249 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:12.456261 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:12.456274 | orchestrator | 2025-04-05 12:33:12.456286 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-05 12:33:12.456299 | orchestrator | Saturday 05 April 2025 12:30:42 +0000 (0:00:05.671) 0:00:19.744 ******** 2025-04-05 12:33:12.456311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-05 12:33:12.456402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-05 12:33:12.456420 | orchestrator | 2025-04-05 12:33:12.456433 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-05 12:33:12.456446 | orchestrator | Saturday 05 April 2025 12:30:46 +0000 (0:00:03.718) 0:00:23.463 ******** 2025-04-05 12:33:12.456458 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.456471 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:12.456483 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:12.456496 | orchestrator | 2025-04-05 12:33:12.456508 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-05 12:33:12.456520 | orchestrator | Saturday 05 April 2025 12:30:47 +0000 (0:00:01.117) 0:00:24.580 ******** 2025-04-05 12:33:12.456533 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.456545 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.456558 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.456570 | orchestrator | 2025-04-05 12:33:12.456583 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-04-05 12:33:12.456595 | orchestrator | Saturday 05 April 2025 12:30:47 +0000 (0:00:00.310) 0:00:24.891 ******** 2025-04-05 12:33:12.456607 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.456619 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.456632 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.456644 | orchestrator | 2025-04-05 12:33:12.456657 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-04-05 12:33:12.456669 | orchestrator | Saturday 05 April 2025 12:30:47 +0000 (0:00:00.245) 0:00:25.137 ******** 2025-04-05 12:33:12.456682 | orchestrator 
| fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-04-05 12:33:12.456695 | orchestrator | ...ignoring 2025-04-05 12:33:12.456707 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-04-05 12:33:12.456720 | orchestrator | ...ignoring 2025-04-05 12:33:12.456732 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-04-05 12:33:12.456745 | orchestrator | ...ignoring 2025-04-05 12:33:12.456774 | orchestrator | 2025-04-05 12:33:12.456786 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-04-05 12:33:12.456799 | orchestrator | Saturday 05 April 2025 12:30:58 +0000 (0:00:10.850) 0:00:35.987 ******** 2025-04-05 12:33:12.456811 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.456823 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.456836 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.456848 | orchestrator | 2025-04-05 12:33:12.456861 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-04-05 12:33:12.456873 | orchestrator | Saturday 05 April 2025 12:30:59 +0000 (0:00:00.494) 0:00:36.482 ******** 2025-04-05 12:33:12.456885 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.456898 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.456910 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.456922 | orchestrator | 2025-04-05 12:33:12.456935 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-04-05 12:33:12.456947 | orchestrator | Saturday 05 April 2025 12:30:59 +0000 (0:00:00.544) 0:00:37.026 ******** 2025-04-05 12:33:12.456959 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.456971 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.456984 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.456996 | orchestrator | 2025-04-05 12:33:12.457014 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-04-05 12:33:12.457027 | orchestrator | Saturday 05 April 2025 12:31:00 +0000 (0:00:00.467) 0:00:37.494 ******** 2025-04-05 12:33:12.457045 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.457058 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.457070 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.457082 | orchestrator | 2025-04-05 12:33:12.457095 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-04-05 12:33:12.457108 | orchestrator | Saturday 05 April 2025 12:31:00 +0000 (0:00:00.653) 0:00:38.148 ******** 2025-04-05 12:33:12.457125 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.457138 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.457151 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.457163 | orchestrator | 2025-04-05 12:33:12.457175 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-04-05 12:33:12.457188 | orchestrator | Saturday 05 April 2025 12:31:01 +0000 (0:00:00.525) 0:00:38.674 ******** 2025-04-05 12:33:12.457200 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.457212 
| orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.457225 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.457237 | orchestrator | 2025-04-05 12:33:12.457249 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-05 12:33:12.457261 | orchestrator | Saturday 05 April 2025 12:31:01 +0000 (0:00:00.356) 0:00:39.031 ******** 2025-04-05 12:33:12.457274 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.457286 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.457298 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-04-05 12:33:12.457310 | orchestrator | 2025-04-05 12:33:12.457323 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-04-05 12:33:12.457347 | orchestrator | Saturday 05 April 2025 12:31:02 +0000 (0:00:00.404) 0:00:39.435 ******** 2025-04-05 12:33:12.457360 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.457373 | orchestrator | 2025-04-05 12:33:12.457385 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-04-05 12:33:12.457397 | orchestrator | Saturday 05 April 2025 12:31:11 +0000 (0:00:09.049) 0:00:48.485 ******** 2025-04-05 12:33:12.457410 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.457422 | orchestrator | 2025-04-05 12:33:12.457434 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-05 12:33:12.457447 | orchestrator | Saturday 05 April 2025 12:31:11 +0000 (0:00:00.109) 0:00:48.595 ******** 2025-04-05 12:33:12.457459 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.457471 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.457484 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.457496 | orchestrator | 2025-04-05 12:33:12.457508 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-04-05 12:33:12.457521 | orchestrator | Saturday 05 April 2025 12:31:12 +0000 (0:00:00.820) 0:00:49.415 ******** 2025-04-05 12:33:12.457533 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.457545 | orchestrator | 2025-04-05 12:33:12.457557 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-04-05 12:33:12.457569 | orchestrator | Saturday 05 April 2025 12:31:17 +0000 (0:00:05.736) 0:00:55.152 ******** 2025-04-05 12:33:12.457582 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
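Editor's note: the handler sequence above bootstraps the Galera cluster on testbed-node-0 and then polls it; "FAILED - RETRYING ... (10 retries left)" is the normal retries/until loop, not an error. The next handler waits until Galera reports a synced WSREP state before the remaining nodes are touched. A minimal sketch of these two waits follows, taking the bootstrap address and the 10-retry budget from the log; the variable mariadb_monitor_password and the use of community.mysql.mysql_query (as a stand-in for the kolla_toolbox call the real kolla-ansible handlers go through) are assumptions.

    # Sketch of the liveness and WSREP-sync waits behind the retries seen above.
    - name: Wait for first MariaDB service port liveness
      ansible.builtin.wait_for:
        host: 192.168.16.10       # bootstrap host testbed-node-0
        port: 3306
        search_regex: MariaDB
        connect_timeout: 1
        timeout: 6
      register: mariadb_port_check
      until: mariadb_port_check is success
      retries: 10                 # matches "(10 retries left)" in the log
      delay: 6

    - name: Wait for first MariaDB service to sync WSREP
      community.mysql.mysql_query:      # stand-in for the kolla_toolbox-based check
        login_host: 192.168.16.10
        login_user: monitor                              # monitor user from the log
        login_password: "{{ mariadb_monitor_password }}" # assumed variable name
        query: SHOW STATUS LIKE 'wsrep_local_state_comment'
      register: wsrep_status
      until: wsrep_status.query_result[0][0]['Value'] == 'Synced'
      retries: 10
      delay: 6

Only after the bootstrap host reports wsrep_local_state_comment = Synced do the subsequent "Start mariadb services" plays restart testbed-node-1 and testbed-node-2 so they can join the cluster, as shown further down.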
2025-04-05 12:33:12.457594 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.457607 | orchestrator | 2025-04-05 12:33:12.457619 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-04-05 12:33:12.457631 | orchestrator | Saturday 05 April 2025 12:31:25 +0000 (0:00:07.272) 0:01:02.425 ******** 2025-04-05 12:33:12.457644 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.457656 | orchestrator | 2025-04-05 12:33:12.457668 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-04-05 12:33:12.457681 | orchestrator | Saturday 05 April 2025 12:31:27 +0000 (0:00:02.538) 0:01:04.964 ******** 2025-04-05 12:33:12.457710 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.457722 | orchestrator | 2025-04-05 12:33:12.457735 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-04-05 12:33:12.457794 | orchestrator | Saturday 05 April 2025 12:31:27 +0000 (0:00:00.212) 0:01:05.176 ******** 2025-04-05 12:33:12.457808 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.457822 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.457843 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.457856 | orchestrator | 2025-04-05 12:33:12.457869 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-04-05 12:33:12.457881 | orchestrator | Saturday 05 April 2025 12:31:28 +0000 (0:00:00.676) 0:01:05.852 ******** 2025-04-05 12:33:12.457894 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.457907 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-05 12:33:12.457919 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:12.457932 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:12.457944 | orchestrator | 2025-04-05 12:33:12.457956 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-05 12:33:12.457968 | orchestrator | skipping: no hosts matched 2025-04-05 12:33:12.457981 | orchestrator | 2025-04-05 12:33:12.457993 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-05 12:33:12.458005 | orchestrator | 2025-04-05 12:33:12.458060 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-05 12:33:12.458076 | orchestrator | Saturday 05 April 2025 12:31:28 +0000 (0:00:00.382) 0:01:06.235 ******** 2025-04-05 12:33:12.458089 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:12.458101 | orchestrator | 2025-04-05 12:33:12.458114 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-05 12:33:12.458126 | orchestrator | Saturday 05 April 2025 12:31:44 +0000 (0:00:16.057) 0:01:22.292 ******** 2025-04-05 12:33:12.458138 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.458150 | orchestrator | 2025-04-05 12:33:12.458163 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-05 12:33:12.458175 | orchestrator | Saturday 05 April 2025 12:32:04 +0000 (0:00:19.512) 0:01:41.805 ******** 2025-04-05 12:33:12.458187 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.458199 | orchestrator | 2025-04-05 12:33:12.458211 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-05 12:33:12.458224 
| orchestrator | 2025-04-05 12:33:12.458236 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-05 12:33:12.458248 | orchestrator | Saturday 05 April 2025 12:32:06 +0000 (0:00:02.237) 0:01:44.042 ******** 2025-04-05 12:33:12.458261 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:12.458273 | orchestrator | 2025-04-05 12:33:12.458286 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-05 12:33:12.458306 | orchestrator | Saturday 05 April 2025 12:32:21 +0000 (0:00:14.433) 0:01:58.475 ******** 2025-04-05 12:33:12.458320 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.458333 | orchestrator | 2025-04-05 12:33:12.458346 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-05 12:33:12.458363 | orchestrator | Saturday 05 April 2025 12:32:40 +0000 (0:00:19.498) 0:02:17.974 ******** 2025-04-05 12:33:12.458376 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.458388 | orchestrator | 2025-04-05 12:33:12.458401 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-05 12:33:12.458414 | orchestrator | 2025-04-05 12:33:12.458427 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-05 12:33:12.458439 | orchestrator | Saturday 05 April 2025 12:32:42 +0000 (0:00:02.248) 0:02:20.222 ******** 2025-04-05 12:33:12.458451 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.458464 | orchestrator | 2025-04-05 12:33:12.458477 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-05 12:33:12.458489 | orchestrator | Saturday 05 April 2025 12:32:53 +0000 (0:00:10.841) 0:02:31.064 ******** 2025-04-05 12:33:12.458510 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.458522 | orchestrator | 2025-04-05 12:33:12.458535 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-05 12:33:12.458547 | orchestrator | Saturday 05 April 2025 12:32:57 +0000 (0:00:03.522) 0:02:34.586 ******** 2025-04-05 12:33:12.458560 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.458573 | orchestrator | 2025-04-05 12:33:12.458586 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-05 12:33:12.458598 | orchestrator | 2025-04-05 12:33:12.458611 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-05 12:33:12.458624 | orchestrator | Saturday 05 April 2025 12:32:59 +0000 (0:00:02.065) 0:02:36.652 ******** 2025-04-05 12:33:12.458637 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:12.458649 | orchestrator | 2025-04-05 12:33:12.458662 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-04-05 12:33:12.458674 | orchestrator | Saturday 05 April 2025 12:32:59 +0000 (0:00:00.647) 0:02:37.299 ******** 2025-04-05 12:33:12.458686 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.458699 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.458711 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.458724 | orchestrator | 2025-04-05 12:33:12.458736 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-04-05 12:33:12.458789 | orchestrator | Saturday 05 April 
2025 12:33:02 +0000 (0:00:02.237) 0:02:39.537 ******** 2025-04-05 12:33:12.458804 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.458816 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.458828 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.458841 | orchestrator | 2025-04-05 12:33:12.458854 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-04-05 12:33:12.458866 | orchestrator | Saturday 05 April 2025 12:33:04 +0000 (0:00:02.033) 0:02:41.570 ******** 2025-04-05 12:33:12.458878 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.458891 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.458903 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.458915 | orchestrator | 2025-04-05 12:33:12.458928 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-05 12:33:12.458941 | orchestrator | Saturday 05 April 2025 12:33:06 +0000 (0:00:02.327) 0:02:43.897 ******** 2025-04-05 12:33:12.458953 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.458965 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.458978 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:12.458990 | orchestrator | 2025-04-05 12:33:12.459003 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-05 12:33:12.459015 | orchestrator | Saturday 05 April 2025 12:33:08 +0000 (0:00:02.060) 0:02:45.958 ******** 2025-04-05 12:33:12.459028 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:12.459040 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:12.459052 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:12.459065 | orchestrator | 2025-04-05 12:33:12.459077 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-05 12:33:12.459089 | orchestrator | Saturday 05 April 2025 12:33:11 +0000 (0:00:02.464) 0:02:48.422 ******** 2025-04-05 12:33:12.459188 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:12.459204 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:12.459217 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:12.459229 | orchestrator | 2025-04-05 12:33:12.459239 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:33:12.459250 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-05 12:33:12.459267 | orchestrator | testbed-node-0 : ok=33  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-05 12:33:12.459284 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-05 12:33:12.459295 | orchestrator | testbed-node-2 : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-05 12:33:12.459305 | orchestrator | 2025-04-05 12:33:12.459315 | orchestrator | 2025-04-05 12:33:12.459325 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:33:12.459335 | orchestrator | Saturday 05 April 2025 12:33:11 +0000 (0:00:00.274) 0:02:48.697 ******** 2025-04-05 12:33:12.459345 | orchestrator | =============================================================================== 2025-04-05 12:33:12.459355 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 39.01s 2025-04-05 12:33:12.459365 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 30.49s 2025-04-05 12:33:12.459380 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s 2025-04-05 12:33:15.506543 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.84s 2025-04-05 12:33:15.506647 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.05s 2025-04-05 12:33:15.506664 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.27s 2025-04-05 12:33:15.506680 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 5.74s 2025-04-05 12:33:15.506693 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.67s 2025-04-05 12:33:15.506708 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.49s 2025-04-05 12:33:15.506723 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.72s 2025-04-05 12:33:15.506737 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.71s 2025-04-05 12:33:15.506801 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 3.52s 2025-04-05 12:33:15.506817 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.86s 2025-04-05 12:33:15.506831 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2025-04-05 12:33:15.506845 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.46s 2025-04-05 12:33:15.506859 | orchestrator | Check MariaDB service --------------------------------------------------- 2.43s 2025-04-05 12:33:15.506873 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.33s 2025-04-05 12:33:15.506887 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.24s 2025-04-05 12:33:15.506901 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.07s 2025-04-05 12:33:15.506915 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.06s 2025-04-05 12:33:15.506929 | orchestrator | 2025-04-05 12:33:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:15.506960 | orchestrator | 2025-04-05 12:33:15 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:15.507962 | orchestrator | 2025-04-05 12:33:15 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:15.509665 | orchestrator | 2025-04-05 12:33:15 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:33:15.510535 | orchestrator | 2025-04-05 12:33:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:18.568339 | orchestrator | 2025-04-05 12:33:18 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:18.569962 | orchestrator | 2025-04-05 12:33:18 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:18.570714 | orchestrator | 2025-04-05 12:33:18 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:33:21.608280 | orchestrator | 2025-04-05 12:33:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:21.608403 | orchestrator | 2025-04-05 12:33:21 | INFO  | 
Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:21.608813 | orchestrator | 2025-04-05 12:33:21 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:21.609495 | orchestrator | 2025-04-05 12:33:21 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:33:24.637834 | orchestrator | 2025-04-05 12:33:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:24.637977 | orchestrator | 2025-04-05 12:33:24 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:24.641966 | orchestrator | 2025-04-05 12:33:24 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:27.675013 | orchestrator | 2025-04-05 12:33:24 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state STARTED 2025-04-05 12:33:27.675120 | orchestrator | 2025-04-05 12:33:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:27.675155 | orchestrator | 2025-04-05 12:33:27 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:27.677576 | orchestrator | 2025-04-05 12:33:27 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:27.678283 | orchestrator | 2025-04-05 12:33:27 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:27.681807 | orchestrator | 2025-04-05 12:33:27 | INFO  | Task 8262bc73-7a82-47e7-b59f-ada502806635 is in state SUCCESS 2025-04-05 12:33:27.683466 | orchestrator | 2025-04-05 12:33:27.683697 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-05 12:33:27.685520 | orchestrator | 2025-04-05 12:33:27.688219 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-04-05 12:33:27.688291 | orchestrator | 2025-04-05 12:33:27.688305 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-05 12:33:27.688317 | orchestrator | Saturday 05 April 2025 12:21:54 +0000 (0:00:01.605) 0:00:01.605 ******** 2025-04-05 12:33:27.688329 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.688340 | orchestrator | 2025-04-05 12:33:27.688351 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-05 12:33:27.688362 | orchestrator | Saturday 05 April 2025 12:21:56 +0000 (0:00:01.231) 0:00:02.836 ******** 2025-04-05 12:33:27.688373 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-05 12:33:27.688383 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-05 12:33:27.688394 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-05 12:33:27.688404 | orchestrator | 2025-04-05 12:33:27.688414 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-05 12:33:27.688424 | orchestrator | Saturday 05 April 2025 12:21:56 +0000 (0:00:00.602) 0:00:03.438 ******** 2025-04-05 12:33:27.688435 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.688446 | orchestrator | 2025-04-05 12:33:27.688456 | orchestrator | TASK [ceph-facts : check if it is atomic host] 
********************************* 2025-04-05 12:33:27.688466 | orchestrator | Saturday 05 April 2025 12:21:57 +0000 (0:00:00.971) 0:00:04.410 ******** 2025-04-05 12:33:27.688476 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.688487 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.688498 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.688526 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.688536 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.688546 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.688556 | orchestrator | 2025-04-05 12:33:27.688567 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-05 12:33:27.688577 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:01.439) 0:00:05.849 ******** 2025-04-05 12:33:27.688587 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.688597 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.688607 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.688617 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.688627 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.688637 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.688647 | orchestrator | 2025-04-05 12:33:27.688658 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-05 12:33:27.688668 | orchestrator | Saturday 05 April 2025 12:21:59 +0000 (0:00:00.799) 0:00:06.649 ******** 2025-04-05 12:33:27.688678 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.688688 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.688698 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.688708 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.688718 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.688728 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.688772 | orchestrator | 2025-04-05 12:33:27.688783 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-05 12:33:27.688798 | orchestrator | Saturday 05 April 2025 12:22:01 +0000 (0:00:01.353) 0:00:08.002 ******** 2025-04-05 12:33:27.688808 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.688818 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.688829 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.688841 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.688852 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.688864 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.688875 | orchestrator | 2025-04-05 12:33:27.688887 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-05 12:33:27.688898 | orchestrator | Saturday 05 April 2025 12:22:02 +0000 (0:00:00.824) 0:00:08.826 ******** 2025-04-05 12:33:27.688909 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.688920 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.688932 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.688943 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.688954 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.688965 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.688977 | orchestrator | 2025-04-05 12:33:27.688988 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-05 12:33:27.689000 | orchestrator | Saturday 05 April 2025 12:22:02 +0000 (0:00:00.734) 0:00:09.561 ******** 2025-04-05 
12:33:27.689011 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.689022 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.689034 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.689045 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.689056 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.689067 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.689078 | orchestrator | 2025-04-05 12:33:27.689090 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-05 12:33:27.689102 | orchestrator | Saturday 05 April 2025 12:22:03 +0000 (0:00:01.034) 0:00:10.596 ******** 2025-04-05 12:33:27.689114 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.689126 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.689138 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.689149 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.689160 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.689172 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.689183 | orchestrator | 2025-04-05 12:33:27.689193 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-05 12:33:27.689203 | orchestrator | Saturday 05 April 2025 12:22:04 +0000 (0:00:01.086) 0:00:11.683 ******** 2025-04-05 12:33:27.689219 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.689229 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.689239 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.689250 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.689260 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.689270 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.689281 | orchestrator | 2025-04-05 12:33:27.689383 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-05 12:33:27.689398 | orchestrator | Saturday 05 April 2025 12:22:06 +0000 (0:00:01.103) 0:00:12.786 ******** 2025-04-05 12:33:27.689409 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:33:27.689419 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.689429 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.689439 | orchestrator | 2025-04-05 12:33:27.689450 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-05 12:33:27.689460 | orchestrator | Saturday 05 April 2025 12:22:07 +0000 (0:00:01.377) 0:00:14.164 ******** 2025-04-05 12:33:27.689470 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.689480 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.689490 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.689500 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.689510 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.689520 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.689530 | orchestrator | 2025-04-05 12:33:27.689540 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-05 12:33:27.689551 | orchestrator | Saturday 05 April 2025 12:22:09 +0000 (0:00:01.740) 0:00:15.904 ******** 2025-04-05 12:33:27.689561 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:33:27.689571 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.689581 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.689591 | orchestrator | 2025-04-05 12:33:27.689601 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-05 12:33:27.689611 | orchestrator | Saturday 05 April 2025 12:22:11 +0000 (0:00:02.384) 0:00:18.289 ******** 2025-04-05 12:33:27.689621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:33:27.689631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:33:27.689641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:33:27.689651 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.689661 | orchestrator | 2025-04-05 12:33:27.689672 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-05 12:33:27.689682 | orchestrator | Saturday 05 April 2025 12:22:11 +0000 (0:00:00.395) 0:00:18.685 ******** 2025-04-05 12:33:27.689692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689725 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.689735 | orchestrator | 2025-04-05 12:33:27.689760 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-05 12:33:27.689778 | orchestrator | Saturday 05 April 2025 12:22:12 +0000 (0:00:00.732) 0:00:19.418 ******** 2025-04-05 12:33:27.689789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689831 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.689841 | orchestrator | 2025-04-05 12:33:27.689851 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-05 12:33:27.689917 | orchestrator | Saturday 05 April 2025 12:22:12 +0000 (0:00:00.247) 0:00:19.666 ******** 2025-04-05 12:33:27.689934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-05 12:22:09.800849', 'end': '2025-04-05 12:22:10.000575', 'delta': '0:00:00.199726', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-05 12:22:10.533023', 'end': '2025-04-05 12:22:10.743872', 'delta': '0:00:00.210849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-05 12:22:11.268739', 'end': '2025-04-05 12:22:11.477071', 'delta': '0:00:00.208332', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-05 12:33:27.689972 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.689990 | orchestrator | 2025-04-05 12:33:27.690001 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-05 12:33:27.690011 | orchestrator | Saturday 05 April 2025 12:22:13 +0000 (0:00:00.356) 0:00:20.023 ******** 2025-04-05 12:33:27.690049 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.690060 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.690070 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.690080 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.690089 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.690099 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.690109 | orchestrator | 2025-04-05 
12:33:27.690119 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-05 12:33:27.690134 | orchestrator | Saturday 05 April 2025 12:22:15 +0000 (0:00:02.050) 0:00:22.073 ******** 2025-04-05 12:33:27.690145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.690155 | orchestrator | 2025-04-05 12:33:27.690165 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-05 12:33:27.690175 | orchestrator | Saturday 05 April 2025 12:22:16 +0000 (0:00:00.869) 0:00:22.943 ******** 2025-04-05 12:33:27.690185 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690195 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690205 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690215 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690225 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690239 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690249 | orchestrator | 2025-04-05 12:33:27.690260 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-05 12:33:27.690291 | orchestrator | Saturday 05 April 2025 12:22:17 +0000 (0:00:01.079) 0:00:24.023 ******** 2025-04-05 12:33:27.690301 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690311 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690321 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690331 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690341 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690351 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690361 | orchestrator | 2025-04-05 12:33:27.690371 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-05 12:33:27.690381 | orchestrator | Saturday 05 April 2025 12:22:19 +0000 (0:00:01.699) 0:00:25.722 ******** 2025-04-05 12:33:27.690391 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690401 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690411 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690421 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690431 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690441 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690451 | orchestrator | 2025-04-05 12:33:27.690461 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-05 12:33:27.690529 | orchestrator | Saturday 05 April 2025 12:22:19 +0000 (0:00:00.701) 0:00:26.423 ******** 2025-04-05 12:33:27.690546 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690558 | orchestrator | 2025-04-05 12:33:27.690569 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-05 12:33:27.690580 | orchestrator | Saturday 05 April 2025 12:22:19 +0000 (0:00:00.081) 0:00:26.505 ******** 2025-04-05 12:33:27.690591 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690602 | orchestrator | 2025-04-05 12:33:27.690613 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-05 12:33:27.690624 | orchestrator | Saturday 05 April 2025 12:22:20 +0000 (0:00:00.496) 0:00:27.001 ******** 2025-04-05 12:33:27.690635 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:33:27.690646 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690657 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690668 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690686 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690697 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690708 | orchestrator | 2025-04-05 12:33:27.690719 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-05 12:33:27.690730 | orchestrator | Saturday 05 April 2025 12:22:21 +0000 (0:00:00.889) 0:00:27.891 ******** 2025-04-05 12:33:27.690741 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690778 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690789 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690799 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690809 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690819 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690828 | orchestrator | 2025-04-05 12:33:27.690838 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-05 12:33:27.690848 | orchestrator | Saturday 05 April 2025 12:22:22 +0000 (0:00:00.877) 0:00:28.769 ******** 2025-04-05 12:33:27.690858 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690868 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690878 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690888 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690898 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690908 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.690918 | orchestrator | 2025-04-05 12:33:27.690928 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-05 12:33:27.690938 | orchestrator | Saturday 05 April 2025 12:22:22 +0000 (0:00:00.820) 0:00:29.590 ******** 2025-04-05 12:33:27.690948 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.690958 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.690968 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.690978 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.690988 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.690998 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.691008 | orchestrator | 2025-04-05 12:33:27.691019 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-05 12:33:27.691029 | orchestrator | Saturday 05 April 2025 12:22:23 +0000 (0:00:01.068) 0:00:30.658 ******** 2025-04-05 12:33:27.691039 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.691049 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.691059 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.691069 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.691079 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.691089 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.691099 | orchestrator | 2025-04-05 12:33:27.691110 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-05 12:33:27.691120 | orchestrator | Saturday 05 April 2025 12:22:24 +0000 (0:00:00.601) 0:00:31.260 ******** 2025-04-05 12:33:27.691130 | orchestrator | skipping: [testbed-node-3] 
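The ceph-facts results above are easier to follow with the underlying task in mind. The "find a running mon container" entries earlier in this play boil down to a container-runtime name filter per monitor host, delegated from the first storage node; the exact command is visible in the skipped-item dictionaries above (docker ps -q --filter name=ceph-mon-<hostname>). A minimal sketch of an equivalent task follows; it is not the ceph-ansible source, and the "mons" group name and task keywords are illustrative assumptions, together with the container_binary fact set a few tasks earlier.

```yaml
# Sketch only: reproduce the "find a running mon container" probe seen in the
# log. For each monitor host, list running containers whose name matches
# ceph-mon-<hostname>; empty stdout means no mon container is running there.
- name: Find a running mon container (sketch)
  ansible.builtin.command: >-
    {{ container_binary }} ps -q --filter
    name=ceph-mon-{{ hostvars[item]['ansible_facts']['hostname'] }}
  register: find_running_mon_container
  failed_when: false
  delegate_to: "{{ item }}"
  loop: "{{ groups['mons'] | default([]) }}"
```

Empty stdout on every monitor, as in the results above, is consistent with the "set_fact running_mon - container" items all being skipped. The long per-device dictionaries that follow come from the "set_fact devices generate device list when osd_auto_discovery" task, which loops over every entry in ansible_facts['devices'] on each host; since every item is skipped, osd_auto_discovery does not appear to be enabled for this testbed, and the dumps below are loop-item echoes rather than changes.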
2025-04-05 12:33:27.691140 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.691150 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.691162 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.691173 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.691185 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.691201 | orchestrator | 2025-04-05 12:33:27.691213 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-05 12:33:27.691224 | orchestrator | Saturday 05 April 2025 12:22:25 +0000 (0:00:00.688) 0:00:31.949 ******** 2025-04-05 12:33:27.691235 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.691246 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.691257 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.691268 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.691280 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.691291 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.691302 | orchestrator | 2025-04-05 12:33:27.691313 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-05 12:33:27.691329 | orchestrator | Saturday 05 April 2025 12:22:26 +0000 (0:00:00.823) 0:00:32.772 ******** 2025-04-05 12:33:27.691342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad0d437a--29fb--56b5--bf7c--f26bd837f294-osd--block--ad0d437a--29fb--56b5--bf7c--f26bd837f294', 'dm-uuid-LVM-9ZdkthWXVB6K3Rmf2WfQnBTk4e9Oc36kc238xngOyUJFcgJs2g5MZoa4Lbz3mwoF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26-osd--block--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26', 'dm-uuid-LVM-O2OjUdnL7tVfem3dUvez9g72jq9uzkpPOzIgKKXfa1U0LwH2tyULlIbin9e9eTGE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-04-05 12:33:27.691462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb474160--46dc--5c48--a12b--143126b3371a-osd--block--eb474160--46dc--5c48--a12b--143126b3371a', 'dm-uuid-LVM-7HDYOGMyP8dxtEsSvrd50kzn6zwf4y4nLiday3eiDhW1FE1LBnmZ2FcrZgrqYPJF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bddbd264--0785--5bf3--9ea2--553c515bd099-osd--block--bddbd264--0785--5bf3--9ea2--553c515bd099', 'dm-uuid-LVM-wXyo7BPVXJEbgjsoz8QBe2jweYiasOx7UfIeso1riU79qhOPju7RRnDlDOHFwIKP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.691673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 
12:33:27.691740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ad0d437a--29fb--56b5--bf7c--f26bd837f294-osd--block--ad0d437a--29fb--56b5--bf7c--f26bd837f294'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bikryc-bkfS-JHcM-Fr5U-w3Rx-RKjF-C9Cneo', 'scsi-0QEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02', 'scsi-SQEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.691824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26-osd--block--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-efD1dB-Y43d-aAs1-aK8m-F1ij-mW5L-dkSqkj', 'scsi-0QEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26', 'scsi-SQEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.691846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2', 'scsi-SQEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.691877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.691961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4aac11a6--844c--526d--9ac8--c50cbafa4162-osd--block--4aac11a6--844c--526d--9ac8--c50cbafa4162', 'dm-uuid-LVM-dZawb3y1Hz1eMnyCpwqDT5tztIuIALPyI0eZJi1cB8OJ2LbGpLSdCGz3xQOB4NOM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.691993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part1', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part14', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part15', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part16', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b2d6610--beab--5485--bcb7--dfee77450e0c-osd--block--7b2d6610--beab--5485--bcb7--dfee77450e0c', 'dm-uuid-LVM-hizAQ83Not4iqaZEez7Dtk8reUvUJykOQMS3puzKQqAPOViDD6XBQSPE0X2FbHH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692109 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.692120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eb474160--46dc--5c48--a12b--143126b3371a-osd--block--eb474160--46dc--5c48--a12b--143126b3371a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NFMzs0-uHDl-wQGu-jf8c-QY8l-0ieC-AMxQZc', 'scsi-0QEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6', 'scsi-SQEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bddbd264--0785--5bf3--9ea2--553c515bd099-osd--block--bddbd264--0785--5bf3--9ea2--553c515bd099'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4VJK7G-rqjJ-LOuM-gOcG-I4bi-wxKo-8lYNZZ', 'scsi-0QEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff', 'scsi-SQEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783', 'scsi-SQEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2', 'scsi-SQEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part15', 'scsi-SQEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_46a255ed-4eac-498b-800b-e13e0459e3b2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e7610b8-96df-421c-b96f-4d1684d93a4c', 'scsi-SQEMU_QEMU_HARDDISK_3e7610b8-96df-421c-b96f-4d1684d93a4c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d464d14f-d012-4c2b-ad7f-7584e12a8ff6', 'scsi-SQEMU_QEMU_HARDDISK_d464d14f-d012-4c2b-ad7f-7584e12a8ff6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_04fc0b0c-cc3e-463d-bd93-2065fe130691', 'scsi-SQEMU_QEMU_HARDDISK_04fc0b0c-cc3e-463d-bd93-2065fe130691'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df', 'scsi-SQEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part1', 'scsi-SQEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part14', 'scsi-SQEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part15', 'scsi-SQEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part16', 'scsi-SQEMU_QEMU_HARDDISK_fafe3624-8e2d-43c3-8528-5f1430e0c7df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05f1e5c2-483d-4605-9e0a-4b755f2c5af8', 'scsi-SQEMU_QEMU_HARDDISK_05f1e5c2-483d-4605-9e0a-4b755f2c5af8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2883817b-319c-4609-b3d8-ef6d07bb9413', 'scsi-SQEMU_QEMU_HARDDISK_2883817b-319c-4609-b3d8-ef6d07bb9413'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1530be44-7738-4993-8ddf-f82dde1dd101', 'scsi-SQEMU_QEMU_HARDDISK_1530be44-7738-4993-8ddf-f82dde1dd101'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692923 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.692933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.692944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.692962 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.693025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4aac11a6--844c--526d--9ac8--c50cbafa4162-osd--block--4aac11a6--844c--526d--9ac8--c50cbafa4162'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0385u-0YCw-OTnA-TMW3-Jmvo-qOah-ALCvFl', 'scsi-0QEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c', 'scsi-SQEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693039 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.693051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7b2d6610--beab--5485--bcb7--dfee77450e0c-osd--block--7b2d6610--beab--5485--bcb7--dfee77450e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zswFWI-bIbY-I3Br-690K-1GYU-iHSa-sg7cSi', 'scsi-0QEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f', 'scsi-SQEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f', 'scsi-SQEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693089 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.693099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:33:27.693249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597', 'scsi-SQEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part1', 'scsi-SQEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part14', 'scsi-SQEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part15', 'scsi-SQEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part16', 'scsi-SQEMU_QEMU_HARDDISK_6df06a38-89f5-41f7-80a7-38daa8b90597-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08ad3194-03e6-46c2-bf31-80971387f831', 'scsi-SQEMU_QEMU_HARDDISK_08ad3194-03e6-46c2-bf31-80971387f831'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d933188b-11f3-4ea5-a96e-67e7dafb4be4', 'scsi-SQEMU_QEMU_HARDDISK_d933188b-11f3-4ea5-a96e-67e7dafb4be4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef34b75a-3d34-4e78-9c2f-2912cb587233', 'scsi-SQEMU_QEMU_HARDDISK_ef34b75a-3d34-4e78-9c2f-2912cb587233'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:33:27.693492 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.693503 | orchestrator | 2025-04-05 12:33:27.693513 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-05 12:33:27.693524 | orchestrator | Saturday 05 April 2025 12:22:27 +0000 (0:00:01.774) 0:00:34.547 ******** 2025-04-05 12:33:27.693534 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.693544 | orchestrator | 2025-04-05 12:33:27.693555 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-05 12:33:27.693565 | orchestrator | Saturday 05 April 2025 12:22:29 +0000 (0:00:01.587) 0:00:36.134 ******** 2025-04-05 12:33:27.693575 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.693585 | orchestrator | 2025-04-05 12:33:27.693596 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-05 12:33:27.693606 | orchestrator | Saturday 05 April 2025 12:22:29 +0000 (0:00:00.135) 0:00:36.270 ******** 2025-04-05 12:33:27.693616 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.693626 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.693637 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.693647 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.693657 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.693667 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.693677 | orchestrator | 2025-04-05 12:33:27.693687 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-05 12:33:27.693697 | orchestrator | Saturday 05 April 2025 12:22:30 +0000 (0:00:00.869) 0:00:37.139 ******** 2025-04-05 12:33:27.693707 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.693717 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.693728 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.693738 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.693797 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.693809 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.693819 | orchestrator | 2025-04-05 12:33:27.693829 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-05 12:33:27.693839 | orchestrator | Saturday 05 April 2025 12:22:32 +0000 (0:00:01.684) 
0:00:38.824 ******** 2025-04-05 12:33:27.693856 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.693866 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.693876 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.693886 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.693896 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.693906 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.693916 | orchestrator | 2025-04-05 12:33:27.693926 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-05 12:33:27.693936 | orchestrator | Saturday 05 April 2025 12:22:32 +0000 (0:00:00.802) 0:00:39.626 ******** 2025-04-05 12:33:27.693946 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694058 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.694077 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694090 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.694102 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.694114 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.694126 | orchestrator | 2025-04-05 12:33:27.694138 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-05 12:33:27.694151 | orchestrator | Saturday 05 April 2025 12:22:33 +0000 (0:00:01.020) 0:00:40.646 ******** 2025-04-05 12:33:27.694163 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694175 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.694187 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694199 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.694211 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.694222 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.694234 | orchestrator | 2025-04-05 12:33:27.694246 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-05 12:33:27.694258 | orchestrator | Saturday 05 April 2025 12:22:34 +0000 (0:00:00.531) 0:00:41.178 ******** 2025-04-05 12:33:27.694270 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694282 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.694294 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694305 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.694331 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.694343 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.694353 | orchestrator | 2025-04-05 12:33:27.694363 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-05 12:33:27.694374 | orchestrator | Saturday 05 April 2025 12:22:35 +0000 (0:00:00.797) 0:00:41.976 ******** 2025-04-05 12:33:27.694384 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694407 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.694418 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694437 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.694448 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.694459 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.694469 | orchestrator | 2025-04-05 12:33:27.694479 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-05 12:33:27.694489 | orchestrator | Saturday 05 April 2025 12:22:36 +0000 (0:00:00.931) 0:00:42.908 ******** 
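For context on the fact flow recorded around this point: ceph-facts first probes for an existing osd pool default CRUSH rule (skipped here), then builds a per-monitor "_monitor_addresses" list by trying monitor_address_block, monitor_address, and monitor_interface for IPv4/IPv6 in turn. The following is only an illustrative sketch of that accumulation pattern, assuming a "mons" inventory group and illustrative variable names; it is not the actual ceph-ansible source.

    # Sketch only: accumulate {name, addr} for every monitor host, mirroring the
    # "set_fact _monitor_addresses to monitor_address" results seen in this log.
    - name: set_fact _monitor_addresses to monitor_address (sketch)
      ansible.builtin.set_fact:
        _monitor_addresses: "{{ (_monitor_addresses | default([])) +
                                [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}"
      loop: "{{ groups['mons'] }}"
      when: hostvars[item]['monitor_address'] is defined

Items of this shape ({'name': 'testbed-node-0', 'addr': '192.168.16.10'}, ...) are what the later "set_fact _current_monitor_address" task matches against each node's own entry.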
2025-04-05 12:33:27.694500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:33:27.694514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:33:27.694525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:33:27.694535 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:33:27.694545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:33:27.694555 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:33:27.694576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:33:27.694586 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.694596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.694613 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:33:27.694623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.694633 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:33:27.694643 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-05 12:33:27.694664 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-05 12:33:27.694674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:33:27.694684 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.694695 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-05 12:33:27.694705 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-05 12:33:27.694715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-05 12:33:27.694725 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.694736 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-05 12:33:27.694760 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.694771 | orchestrator | 2025-04-05 12:33:27.694782 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-05 12:33:27.694792 | orchestrator | Saturday 05 April 2025 12:22:38 +0000 (0:00:02.261) 0:00:45.170 ******** 2025-04-05 12:33:27.694802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:33:27.694812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:33:27.694822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:33:27.694832 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:33:27.694842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:33:27.694852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:33:27.694866 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.694876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:33:27.694886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:33:27.694897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:33:27.694907 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.694917 | orchestrator | skipping: [testbed-node-4] 
2025-04-05 12:33:27.694927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.694937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.694947 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-05 12:33:27.694957 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-05 12:33:27.694967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:33:27.694977 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.695049 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-05 12:33:27.695064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-05 12:33:27.695074 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-05 12:33:27.695084 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.695094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-05 12:33:27.695104 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.695114 | orchestrator | 2025-04-05 12:33:27.695124 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-05 12:33:27.695134 | orchestrator | Saturday 05 April 2025 12:22:39 +0000 (0:00:01.310) 0:00:46.481 ******** 2025-04-05 12:33:27.695144 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-05 12:33:27.695154 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-05 12:33:27.695164 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-05 12:33:27.695179 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-05 12:33:27.695197 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-05 12:33:27.695208 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-05 12:33:27.695219 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-05 12:33:27.695230 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-04-05 12:33:27.695241 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-04-05 12:33:27.695252 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-05 12:33:27.695263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-05 12:33:27.695274 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-05 12:33:27.695285 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-05 12:33:27.695296 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-04-05 12:33:27.695307 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-04-05 12:33:27.695318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-05 12:33:27.695329 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-04-05 12:33:27.695340 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-04-05 12:33:27.695351 | orchestrator | 2025-04-05 12:33:27.695362 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-05 12:33:27.695373 | orchestrator | Saturday 05 April 2025 12:22:44 +0000 (0:00:04.368) 0:00:50.849 ******** 2025-04-05 12:33:27.695384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:33:27.695396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:33:27.695410 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2025-04-05 12:33:27.695422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:33:27.695433 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.695444 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:33:27.695455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:33:27.695466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:33:27.695477 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:33:27.695488 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:33:27.695499 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.695510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.695521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.695532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:33:27.695543 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.695554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-05 12:33:27.695566 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-05 12:33:27.695577 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.695588 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-05 12:33:27.695599 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.695610 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-05 12:33:27.695621 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-05 12:33:27.695632 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-05 12:33:27.695643 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.695654 | orchestrator | 2025-04-05 12:33:27.695665 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-05 12:33:27.695676 | orchestrator | Saturday 05 April 2025 12:22:45 +0000 (0:00:01.405) 0:00:52.255 ******** 2025-04-05 12:33:27.695688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:33:27.695701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:33:27.695713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:33:27.695733 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.695761 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:33:27.695773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:33:27.695785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:33:27.695796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:33:27.695807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:33:27.695819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:33:27.695830 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.695841 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.695853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.695864 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.695930 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-2)  2025-04-05 12:33:27.695946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-05 12:33:27.695958 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.695969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-05 12:33:27.695980 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-05 12:33:27.695990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-05 12:33:27.696001 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.696012 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-05 12:33:27.696022 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-05 12:33:27.696033 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.696044 | orchestrator | 2025-04-05 12:33:27.696055 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-05 12:33:27.696065 | orchestrator | Saturday 05 April 2025 12:22:46 +0000 (0:00:01.441) 0:00:53.696 ******** 2025-04-05 12:33:27.696076 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:33:27.696087 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:33:27.696098 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:33:27.696109 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696120 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:33:27.696131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:33:27.696142 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:33:27.696152 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.696163 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:33:27.696174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:33:27.696185 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:33:27.696195 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.696206 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-05 12:33:27.696217 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:33:27.696228 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:33:27.696252 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:33:27.696264 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-04-05 12:33:27.696274 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:33:27.696291 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:33:27.696301 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 
'addr': '192.168.16.11'})  2025-04-05 12:33:27.696311 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-04-05 12:33:27.696321 | orchestrator | 2025-04-05 12:33:27.696332 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-05 12:33:27.696342 | orchestrator | Saturday 05 April 2025 12:22:48 +0000 (0:00:01.516) 0:00:55.212 ******** 2025-04-05 12:33:27.696352 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.696362 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.696372 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.696382 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.696393 | orchestrator | 2025-04-05 12:33:27.696403 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.696413 | orchestrator | Saturday 05 April 2025 12:22:49 +0000 (0:00:01.087) 0:00:56.300 ******** 2025-04-05 12:33:27.696423 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696433 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.696444 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.696454 | orchestrator | 2025-04-05 12:33:27.696469 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.696479 | orchestrator | Saturday 05 April 2025 12:22:50 +0000 (0:00:00.609) 0:00:56.909 ******** 2025-04-05 12:33:27.696489 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696499 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.696509 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.696519 | orchestrator | 2025-04-05 12:33:27.696529 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.696539 | orchestrator | Saturday 05 April 2025 12:22:50 +0000 (0:00:00.438) 0:00:57.347 ******** 2025-04-05 12:33:27.696549 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696560 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.696570 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.696580 | orchestrator | 2025-04-05 12:33:27.696590 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.696600 | orchestrator | Saturday 05 April 2025 12:22:51 +0000 (0:00:00.480) 0:00:57.828 ******** 2025-04-05 12:33:27.696610 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.696674 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.696690 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.696701 | orchestrator | 2025-04-05 12:33:27.696712 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.696723 | orchestrator | Saturday 05 April 2025 12:22:51 +0000 (0:00:00.442) 0:00:58.271 ******** 2025-04-05 12:33:27.696734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.696785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.696802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.696812 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696823 | orchestrator | 2025-04-05 
12:33:27.696833 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.696843 | orchestrator | Saturday 05 April 2025 12:22:52 +0000 (0:00:00.536) 0:00:58.807 ******** 2025-04-05 12:33:27.696854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.696864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.696874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.696884 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696902 | orchestrator | 2025-04-05 12:33:27.696912 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.696922 | orchestrator | Saturday 05 April 2025 12:22:52 +0000 (0:00:00.697) 0:00:59.505 ******** 2025-04-05 12:33:27.696932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.696942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.696952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.696962 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.696972 | orchestrator | 2025-04-05 12:33:27.696982 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.696992 | orchestrator | Saturday 05 April 2025 12:22:53 +0000 (0:00:00.504) 0:01:00.009 ******** 2025-04-05 12:33:27.697002 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.697012 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.697027 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.697037 | orchestrator | 2025-04-05 12:33:27.697047 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.697057 | orchestrator | Saturday 05 April 2025 12:22:53 +0000 (0:00:00.624) 0:01:00.634 ******** 2025-04-05 12:33:27.697067 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-05 12:33:27.697077 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-05 12:33:27.697087 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-05 12:33:27.697097 | orchestrator | 2025-04-05 12:33:27.697107 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.697117 | orchestrator | Saturday 05 April 2025 12:22:54 +0000 (0:00:00.805) 0:01:01.439 ******** 2025-04-05 12:33:27.697127 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697137 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697147 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697157 | orchestrator | 2025-04-05 12:33:27.697167 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.697177 | orchestrator | Saturday 05 April 2025 12:22:55 +0000 (0:00:00.377) 0:01:01.817 ******** 2025-04-05 12:33:27.697187 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697198 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697208 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697218 | orchestrator | 2025-04-05 12:33:27.697228 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.697238 | orchestrator | Saturday 05 April 2025 12:22:55 +0000 (0:00:00.570) 0:01:02.388 ******** 2025-04-05 12:33:27.697248 | 
orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.697258 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697268 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.697278 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697289 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.697299 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697310 | orchestrator | 2025-04-05 12:33:27.697321 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.697333 | orchestrator | Saturday 05 April 2025 12:22:56 +0000 (0:00:00.560) 0:01:02.949 ******** 2025-04-05 12:33:27.697344 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.697355 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697367 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.697378 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697393 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.697405 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697422 | orchestrator | 2025-04-05 12:33:27.697434 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.697445 | orchestrator | Saturday 05 April 2025 12:22:56 +0000 (0:00:00.616) 0:01:03.566 ******** 2025-04-05 12:33:27.697457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.697468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.697480 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.697491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.697502 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.697514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.697525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.697594 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697611 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.697623 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697636 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.697648 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697660 | orchestrator | 2025-04-05 12:33:27.697672 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-05 12:33:27.697683 | orchestrator | Saturday 05 April 2025 12:22:57 +0000 (0:00:00.554) 0:01:04.120 ******** 2025-04-05 12:33:27.697694 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.697705 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.697716 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.697726 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.697737 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.697763 | orchestrator | skipping: [testbed-node-2] 
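The two tasks that follow, "set_fact ceph_run_cmd" and "set_fact ceph_admin_command", give every host (including testbed-manager) a ready-to-use ceph CLI invocation; on a containerized deployment that means exec'ing into the local mon container rather than calling a bare ceph binary. A minimal sketch of the idea, with container_binary, containerized_deployment, and the container name pattern assumed for illustration rather than copied from ceph-ansible:

    # Sketch only: pick "podman/docker exec <mon container> ceph" when the
    # deployment is containerized, otherwise the plain ceph binary.
    - name: set_fact ceph_run_cmd (sketch)
      ansible.builtin.set_fact:
        ceph_run_cmd: >-
          {{ (container_binary ~ ' exec ceph-mon-' ~ ansible_facts['hostname'] ~ ' ceph')
             if containerized_deployment | bool else 'ceph' }}

    # The ceph-handler "check for a ... container" tasks recorded next reduce to
    # a filtered container listing of roughly this shape (names assumed):
    - name: check for a mon container (sketch)
      ansible.builtin.command: >-
        {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false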
2025-04-05 12:33:27.697775 | orchestrator | 2025-04-05 12:33:27.697786 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-05 12:33:27.697797 | orchestrator | Saturday 05 April 2025 12:22:58 +0000 (0:00:00.707) 0:01:04.827 ******** 2025-04-05 12:33:27.697807 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:33:27.697818 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.697834 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.697845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-05 12:33:27.697856 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-05 12:33:27.697867 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-05 12:33:27.697878 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-05 12:33:27.697889 | orchestrator | 2025-04-05 12:33:27.697900 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-05 12:33:27.697911 | orchestrator | Saturday 05 April 2025 12:22:58 +0000 (0:00:00.763) 0:01:05.591 ******** 2025-04-05 12:33:27.697921 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:33:27.697932 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.697943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.697953 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-05 12:33:27.697964 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-05 12:33:27.697975 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-05 12:33:27.697986 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-05 12:33:27.698009 | orchestrator | 2025-04-05 12:33:27.698040 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.698058 | orchestrator | Saturday 05 April 2025 12:23:00 +0000 (0:00:01.644) 0:01:07.235 ******** 2025-04-05 12:33:27.698068 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.698079 | orchestrator | 2025-04-05 12:33:27.698090 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-05 12:33:27.698100 | orchestrator | Saturday 05 April 2025 12:23:01 +0000 (0:00:01.101) 0:01:08.337 ******** 2025-04-05 12:33:27.698110 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698120 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698130 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.698140 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.698150 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.698160 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.698170 | orchestrator | 2025-04-05 12:33:27.698181 | orchestrator | TASK [ceph-handler : check for an osd 
container] ******************************* 2025-04-05 12:33:27.698191 | orchestrator | Saturday 05 April 2025 12:23:02 +0000 (0:00:01.044) 0:01:09.381 ******** 2025-04-05 12:33:27.698201 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698211 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698221 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.698232 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698242 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.698252 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.698263 | orchestrator | 2025-04-05 12:33:27.698273 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.698283 | orchestrator | Saturday 05 April 2025 12:23:03 +0000 (0:00:00.814) 0:01:10.196 ******** 2025-04-05 12:33:27.698294 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.698304 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698314 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698324 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.698334 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698344 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.698354 | orchestrator | 2025-04-05 12:33:27.698364 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.698374 | orchestrator | Saturday 05 April 2025 12:23:04 +0000 (0:00:00.778) 0:01:10.974 ******** 2025-04-05 12:33:27.698384 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.698398 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698414 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.698431 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698447 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.698463 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698480 | orchestrator | 2025-04-05 12:33:27.698490 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.698566 | orchestrator | Saturday 05 April 2025 12:23:04 +0000 (0:00:00.719) 0:01:11.694 ******** 2025-04-05 12:33:27.698580 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698591 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698608 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.698618 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.698628 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.698639 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.698649 | orchestrator | 2025-04-05 12:33:27.698659 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.698670 | orchestrator | Saturday 05 April 2025 12:23:06 +0000 (0:00:01.263) 0:01:12.957 ******** 2025-04-05 12:33:27.698680 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698690 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698700 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.698710 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698728 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698738 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698765 | orchestrator | 2025-04-05 12:33:27.698776 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 
12:33:27.698786 | orchestrator | Saturday 05 April 2025 12:23:06 +0000 (0:00:00.678) 0:01:13.636 ******** 2025-04-05 12:33:27.698796 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698806 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698816 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.698826 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698836 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698846 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698856 | orchestrator | 2025-04-05 12:33:27.698867 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.698877 | orchestrator | Saturday 05 April 2025 12:23:07 +0000 (0:00:00.692) 0:01:14.329 ******** 2025-04-05 12:33:27.698887 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698897 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698907 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.698917 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.698927 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.698937 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.698947 | orchestrator | 2025-04-05 12:33:27.698958 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.698968 | orchestrator | Saturday 05 April 2025 12:23:08 +0000 (0:00:00.670) 0:01:15.000 ******** 2025-04-05 12:33:27.698978 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.698987 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.698997 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699007 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699017 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699027 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699037 | orchestrator | 2025-04-05 12:33:27.699051 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.699061 | orchestrator | Saturday 05 April 2025 12:23:08 +0000 (0:00:00.696) 0:01:15.696 ******** 2025-04-05 12:33:27.699071 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.699081 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.699091 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699101 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699111 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699121 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699131 | orchestrator | 2025-04-05 12:33:27.699141 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.699151 | orchestrator | Saturday 05 April 2025 12:23:09 +0000 (0:00:00.694) 0:01:16.390 ******** 2025-04-05 12:33:27.699162 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.699172 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.699182 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.699193 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.699204 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.699215 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.699227 | orchestrator | 2025-04-05 12:33:27.699239 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.699250 | orchestrator 
| Saturday 05 April 2025 12:23:10 +0000 (0:00:01.069) 0:01:17.460 ******** 2025-04-05 12:33:27.699262 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.699273 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.699284 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699295 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699307 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699318 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699330 | orchestrator | 2025-04-05 12:33:27.699341 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.699362 | orchestrator | Saturday 05 April 2025 12:23:11 +0000 (0:00:00.724) 0:01:18.184 ******** 2025-04-05 12:33:27.699374 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.699385 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.699397 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699408 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.699419 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.699435 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.699447 | orchestrator | 2025-04-05 12:33:27.699458 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.699470 | orchestrator | Saturday 05 April 2025 12:23:12 +0000 (0:00:01.414) 0:01:19.598 ******** 2025-04-05 12:33:27.699481 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.699493 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.699504 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.699516 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699528 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699539 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699551 | orchestrator | 2025-04-05 12:33:27.699561 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.699571 | orchestrator | Saturday 05 April 2025 12:23:13 +0000 (0:00:00.632) 0:01:20.231 ******** 2025-04-05 12:33:27.699581 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.699592 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.699602 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.699612 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699622 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699632 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699642 | orchestrator | 2025-04-05 12:33:27.699707 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.699721 | orchestrator | Saturday 05 April 2025 12:23:14 +0000 (0:00:00.638) 0:01:20.869 ******** 2025-04-05 12:33:27.699732 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.699742 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.699790 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.699801 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699811 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699822 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699832 | orchestrator | 2025-04-05 12:33:27.699842 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.699853 | orchestrator | Saturday 05 April 2025 12:23:14 +0000 (0:00:00.524) 0:01:21.394 
******** 2025-04-05 12:33:27.699863 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.699873 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.699883 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699894 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699904 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.699914 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.699925 | orchestrator | 2025-04-05 12:33:27.699935 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.699945 | orchestrator | Saturday 05 April 2025 12:23:15 +0000 (0:00:00.642) 0:01:22.037 ******** 2025-04-05 12:33:27.699955 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.699965 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.699975 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.699985 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.699995 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700006 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700016 | orchestrator | 2025-04-05 12:33:27.700026 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.700036 | orchestrator | Saturday 05 April 2025 12:23:15 +0000 (0:00:00.562) 0:01:22.599 ******** 2025-04-05 12:33:27.700046 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700056 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700076 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700102 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.700112 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.700123 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.700133 | orchestrator | 2025-04-05 12:33:27.700143 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.700153 | orchestrator | Saturday 05 April 2025 12:23:16 +0000 (0:00:00.772) 0:01:23.372 ******** 2025-04-05 12:33:27.700163 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.700173 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.700183 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.700194 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.700203 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.700213 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.700223 | orchestrator | 2025-04-05 12:33:27.700234 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.700244 | orchestrator | Saturday 05 April 2025 12:23:17 +0000 (0:00:00.604) 0:01:23.976 ******** 2025-04-05 12:33:27.700255 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700265 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700275 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700285 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700295 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700306 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700314 | orchestrator | 2025-04-05 12:33:27.700323 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.700332 | orchestrator | Saturday 05 April 2025 12:23:18 +0000 (0:00:00.744) 0:01:24.720 ******** 2025-04-05 12:33:27.700342 | 
orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700351 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700361 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700376 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700387 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700396 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700406 | orchestrator | 2025-04-05 12:33:27.700415 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.700425 | orchestrator | Saturday 05 April 2025 12:23:18 +0000 (0:00:00.510) 0:01:25.231 ******** 2025-04-05 12:33:27.700435 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700444 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700454 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700463 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700473 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700482 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700491 | orchestrator | 2025-04-05 12:33:27.700501 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.700511 | orchestrator | Saturday 05 April 2025 12:23:19 +0000 (0:00:00.669) 0:01:25.901 ******** 2025-04-05 12:33:27.700521 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700530 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700540 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700549 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700559 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700569 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700578 | orchestrator | 2025-04-05 12:33:27.700588 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.700597 | orchestrator | Saturday 05 April 2025 12:23:19 +0000 (0:00:00.568) 0:01:26.469 ******** 2025-04-05 12:33:27.700607 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700617 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700626 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700636 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700645 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700660 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700669 | orchestrator | 2025-04-05 12:33:27.700682 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.700691 | orchestrator | Saturday 05 April 2025 12:23:20 +0000 (0:00:00.949) 0:01:27.418 ******** 2025-04-05 12:33:27.700700 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700709 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700717 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700789 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700802 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700811 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700819 | orchestrator | 2025-04-05 12:33:27.700828 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.700837 | orchestrator | Saturday 05 April 2025 12:23:21 +0000 (0:00:00.615) 0:01:28.034 ******** 2025-04-05 
12:33:27.700845 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700854 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700862 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700871 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700879 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700887 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700896 | orchestrator | 2025-04-05 12:33:27.700904 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.700913 | orchestrator | Saturday 05 April 2025 12:23:22 +0000 (0:00:00.710) 0:01:28.745 ******** 2025-04-05 12:33:27.700922 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.700930 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.700938 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.700947 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.700955 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.700964 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.700972 | orchestrator | 2025-04-05 12:33:27.700985 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.700994 | orchestrator | Saturday 05 April 2025 12:23:22 +0000 (0:00:00.556) 0:01:29.301 ******** 2025-04-05 12:33:27.701003 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701011 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701019 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701028 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701036 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701044 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701053 | orchestrator | 2025-04-05 12:33:27.701061 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.701070 | orchestrator | Saturday 05 April 2025 12:23:23 +0000 (0:00:00.634) 0:01:29.936 ******** 2025-04-05 12:33:27.701079 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701087 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701096 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701104 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701113 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701125 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701134 | orchestrator | 2025-04-05 12:33:27.701142 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.701151 | orchestrator | Saturday 05 April 2025 12:23:23 +0000 (0:00:00.467) 0:01:30.403 ******** 2025-04-05 12:33:27.701160 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701168 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701177 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701185 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701194 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701202 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701211 | orchestrator | 2025-04-05 12:33:27.701225 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.701233 
| orchestrator | Saturday 05 April 2025 12:23:24 +0000 (0:00:00.671) 0:01:31.074 ******** 2025-04-05 12:33:27.701242 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701250 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701259 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701267 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701276 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701284 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701293 | orchestrator | 2025-04-05 12:33:27.701301 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.701310 | orchestrator | Saturday 05 April 2025 12:23:24 +0000 (0:00:00.549) 0:01:31.623 ******** 2025-04-05 12:33:27.701319 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.701327 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.701336 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701344 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.701353 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.701361 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701370 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.701379 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.701387 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701396 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.701404 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.701413 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701421 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.701431 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.701441 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701450 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.701460 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.701469 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701479 | orchestrator | 2025-04-05 12:33:27.701488 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.701498 | orchestrator | Saturday 05 April 2025 12:23:25 +0000 (0:00:00.806) 0:01:32.430 ******** 2025-04-05 12:33:27.701508 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-05 12:33:27.701517 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-05 12:33:27.701527 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701537 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-05 12:33:27.701546 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-05 12:33:27.701556 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701611 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-05 12:33:27.701623 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-05 12:33:27.701632 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701641 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-05 12:33:27.701650 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-05 
12:33:27.701658 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701667 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-05 12:33:27.701675 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-05 12:33:27.701684 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701692 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-05 12:33:27.701701 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-05 12:33:27.701709 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701718 | orchestrator | 2025-04-05 12:33:27.701726 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.701740 | orchestrator | Saturday 05 April 2025 12:23:26 +0000 (0:00:00.555) 0:01:32.986 ******** 2025-04-05 12:33:27.701764 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701773 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701782 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701790 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701799 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701807 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701816 | orchestrator | 2025-04-05 12:33:27.701825 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.701833 | orchestrator | Saturday 05 April 2025 12:23:26 +0000 (0:00:00.652) 0:01:33.638 ******** 2025-04-05 12:33:27.701842 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701850 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701858 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701867 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701875 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701884 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701892 | orchestrator | 2025-04-05 12:33:27.701901 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.701910 | orchestrator | Saturday 05 April 2025 12:23:27 +0000 (0:00:00.526) 0:01:34.164 ******** 2025-04-05 12:33:27.701919 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.701927 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.701936 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.701944 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.701953 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.701961 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.701970 | orchestrator | 2025-04-05 12:33:27.701978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.701987 | orchestrator | Saturday 05 April 2025 12:23:28 +0000 (0:00:00.647) 0:01:34.812 ******** 2025-04-05 12:33:27.701995 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702004 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702043 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702052 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702061 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702070 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702078 | orchestrator | 
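[Editorial note, not part of the console output] The ceph-facts tasks around this point resolve `_radosgw_address` for each host, either from an explicit `radosgw_address`, from a `radosgw_address_block`, or from `radosgw_interface`. As a rough illustration of the address-block case only, here is a minimal Python sketch: the helper name and the `192.168.16.0/24` block are assumptions for illustration (the log itself only shows the resulting node addresses 192.168.16.13-15 later on), and this is not the actual ceph-ansible implementation.

```python
# Minimal sketch (assumption, not ceph-ansible code): pick the host address
# that falls inside a configured radosgw_address_block, roughly what the
# "set_fact _radosgw_address to radosgw_address_block ipv4" task selects.
import ipaddress


def resolve_radosgw_address(host_addresses, radosgw_address_block):
    """Return the first host address contained in the configured block."""
    network = ipaddress.ip_network(radosgw_address_block)
    for addr in host_addresses:
        if ipaddress.ip_address(addr) in network:
            return addr
    raise LookupError(f"no address inside {radosgw_address_block}")


# Hypothetical usage: 192.168.16.13 is one of the node addresses seen in this
# log; the /24 block is assumed for the example.
print(resolve_radosgw_address(["10.0.2.15", "192.168.16.13"], "192.168.16.0/24"))
# -> 192.168.16.13
```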
2025-04-05 12:33:27.702087 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.702096 | orchestrator | Saturday 05 April 2025 12:23:28 +0000 (0:00:00.655) 0:01:35.468 ******** 2025-04-05 12:33:27.702104 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702113 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702126 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702134 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702143 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702151 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702160 | orchestrator | 2025-04-05 12:33:27.702169 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.702177 | orchestrator | Saturday 05 April 2025 12:23:29 +0000 (0:00:00.787) 0:01:36.255 ******** 2025-04-05 12:33:27.702186 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702194 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702203 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702211 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702220 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702228 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702237 | orchestrator | 2025-04-05 12:33:27.702245 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.702255 | orchestrator | Saturday 05 April 2025 12:23:30 +0000 (0:00:00.660) 0:01:36.916 ******** 2025-04-05 12:33:27.702270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.702280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.702290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.702299 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702309 | orchestrator | 2025-04-05 12:33:27.702318 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.702328 | orchestrator | Saturday 05 April 2025 12:23:30 +0000 (0:00:00.552) 0:01:37.468 ******** 2025-04-05 12:33:27.702337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.702347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.702357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.702366 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702376 | orchestrator | 2025-04-05 12:33:27.702385 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.702395 | orchestrator | Saturday 05 April 2025 12:23:31 +0000 (0:00:00.634) 0:01:38.103 ******** 2025-04-05 12:33:27.702404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.702469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.702486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.702496 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702505 | orchestrator | 2025-04-05 12:33:27.702514 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.702523 | orchestrator | Saturday 05 April 
2025 12:23:31 +0000 (0:00:00.377) 0:01:38.481 ******** 2025-04-05 12:33:27.702531 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702540 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702549 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702557 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702566 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702574 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702582 | orchestrator | 2025-04-05 12:33:27.702591 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.702599 | orchestrator | Saturday 05 April 2025 12:23:32 +0000 (0:00:00.543) 0:01:39.024 ******** 2025-04-05 12:33:27.702608 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.702616 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702625 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.702634 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702642 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.702651 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702659 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.702667 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702676 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.702685 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702693 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.702702 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702710 | orchestrator | 2025-04-05 12:33:27.702719 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.702727 | orchestrator | Saturday 05 April 2025 12:23:33 +0000 (0:00:00.839) 0:01:39.863 ******** 2025-04-05 12:33:27.702736 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702781 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702791 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702800 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702809 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702818 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702826 | orchestrator | 2025-04-05 12:33:27.702835 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.702849 | orchestrator | Saturday 05 April 2025 12:23:33 +0000 (0:00:00.528) 0:01:40.392 ******** 2025-04-05 12:33:27.702858 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.702867 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702875 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702884 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.702892 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.702901 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.702910 | orchestrator | 2025-04-05 12:33:27.702918 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.702927 | orchestrator | Saturday 05 April 2025 12:23:34 +0000 (0:00:00.657) 0:01:41.049 ******** 2025-04-05 12:33:27.702936 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.702945 | orchestrator | skipping: 
[testbed-node-3] 2025-04-05 12:33:27.702953 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.702962 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.702971 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.702979 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.702988 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.702997 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703005 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.703014 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703026 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.703035 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703044 | orchestrator | 2025-04-05 12:33:27.703053 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.703061 | orchestrator | Saturday 05 April 2025 12:23:35 +0000 (0:00:00.696) 0:01:41.745 ******** 2025-04-05 12:33:27.703070 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.703079 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703088 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.703096 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703105 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.703114 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703122 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703131 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703140 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703148 | orchestrator | 2025-04-05 12:33:27.703157 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.703166 | orchestrator | Saturday 05 April 2025 12:23:35 +0000 (0:00:00.682) 0:01:42.427 ******** 2025-04-05 12:33:27.703174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.703183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.703192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.703200 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.703218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.703275 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.703286 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703300 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.703308 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.703316 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.703332 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.703348 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.703356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.703364 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703372 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 12:33:27.703380 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:33:27.703387 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:33:27.703395 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703403 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:33:27.703411 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:33:27.703419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:33:27.703427 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703435 | orchestrator | 2025-04-05 12:33:27.703443 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.703451 | orchestrator | Saturday 05 April 2025 12:23:37 +0000 (0:00:01.463) 0:01:43.891 ******** 2025-04-05 12:33:27.703458 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703467 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703474 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703482 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703490 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703498 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703506 | orchestrator | 2025-04-05 12:33:27.703514 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.703522 | orchestrator | Saturday 05 April 2025 12:23:38 +0000 (0:00:01.092) 0:01:44.983 ******** 2025-04-05 12:33:27.703529 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.703537 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703545 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.703553 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703561 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.703569 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703577 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703585 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703604 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703612 | orchestrator | 2025-04-05 12:33:27.703621 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.703629 | orchestrator | Saturday 05 April 2025 12:23:39 +0000 (0:00:01.074) 0:01:46.058 ******** 2025-04-05 12:33:27.703637 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703645 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703653 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703661 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703669 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703677 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703685 | orchestrator | 2025-04-05 12:33:27.703693 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.703701 | orchestrator | Saturday 
05 April 2025 12:23:40 +0000 (0:00:01.049) 0:01:47.107 ******** 2025-04-05 12:33:27.703709 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703717 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703725 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.703733 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.703741 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.703762 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.703770 | orchestrator | 2025-04-05 12:33:27.703778 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-04-05 12:33:27.703791 | orchestrator | Saturday 05 April 2025 12:23:41 +0000 (0:00:01.037) 0:01:48.145 ******** 2025-04-05 12:33:27.703799 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.703807 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.703815 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.703823 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.703830 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.703838 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.703846 | orchestrator | 2025-04-05 12:33:27.703854 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-04-05 12:33:27.703862 | orchestrator | Saturday 05 April 2025 12:23:42 +0000 (0:00:01.250) 0:01:49.395 ******** 2025-04-05 12:33:27.703870 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.703878 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.703886 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.703894 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.703902 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.703910 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.703918 | orchestrator | 2025-04-05 12:33:27.703926 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-04-05 12:33:27.703934 | orchestrator | Saturday 05 April 2025 12:23:45 +0000 (0:00:02.475) 0:01:51.871 ******** 2025-04-05 12:33:27.703942 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.703951 | orchestrator | 2025-04-05 12:33:27.703959 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-04-05 12:33:27.703967 | orchestrator | Saturday 05 April 2025 12:23:46 +0000 (0:00:01.257) 0:01:53.128 ******** 2025-04-05 12:33:27.703974 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.703982 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.703990 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704043 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704054 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704063 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704071 | orchestrator | 2025-04-05 12:33:27.704079 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-04-05 12:33:27.704087 | orchestrator | Saturday 05 April 2025 12:23:47 +0000 (0:00:00.699) 0:01:53.828 ******** 2025-04-05 12:33:27.704095 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704103 | orchestrator | skipping: [testbed-node-4] 2025-04-05 
12:33:27.704111 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704118 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704126 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704134 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704147 | orchestrator | 2025-04-05 12:33:27.704159 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-04-05 12:33:27.704167 | orchestrator | Saturday 05 April 2025 12:23:48 +0000 (0:00:00.901) 0:01:54.729 ******** 2025-04-05 12:33:27.704175 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704183 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704191 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704199 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704207 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704214 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704222 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704230 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-05 12:33:27.704243 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704251 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704259 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704266 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-05 12:33:27.704274 | orchestrator | 2025-04-05 12:33:27.704282 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-04-05 12:33:27.704290 | orchestrator | Saturday 05 April 2025 12:23:49 +0000 (0:00:01.170) 0:01:55.900 ******** 2025-04-05 12:33:27.704298 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.704307 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.704315 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.704323 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.704330 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.704338 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.704346 | orchestrator | 2025-04-05 12:33:27.704355 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-04-05 12:33:27.704363 | orchestrator | Saturday 05 April 2025 12:23:50 +0000 (0:00:01.001) 0:01:56.901 ******** 2025-04-05 12:33:27.704371 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704379 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.704386 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704394 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704402 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704410 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704418 | orchestrator | 2025-04-05 12:33:27.704426 | orchestrator | TASK [ceph-container-common : include 
registry.yml] **************************** 2025-04-05 12:33:27.704434 | orchestrator | Saturday 05 April 2025 12:23:50 +0000 (0:00:00.525) 0:01:57.427 ******** 2025-04-05 12:33:27.704442 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704450 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.704458 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704466 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704474 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704481 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704489 | orchestrator | 2025-04-05 12:33:27.704497 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-04-05 12:33:27.704505 | orchestrator | Saturday 05 April 2025 12:23:51 +0000 (0:00:00.709) 0:01:58.136 ******** 2025-04-05 12:33:27.704513 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.704522 | orchestrator | 2025-04-05 12:33:27.704530 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:quincy image] *** 2025-04-05 12:33:27.704538 | orchestrator | Saturday 05 April 2025 12:23:52 +0000 (0:00:01.053) 0:01:59.189 ******** 2025-04-05 12:33:27.704546 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.704554 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.704562 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.704570 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.704578 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.704586 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.704594 | orchestrator | 2025-04-05 12:33:27.704602 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-04-05 12:33:27.704610 | orchestrator | Saturday 05 April 2025 12:24:29 +0000 (0:00:37.281) 0:02:36.471 ******** 2025-04-05 12:33:27.704618 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704626 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704634 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704646 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704695 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704706 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704714 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704722 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.704734 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704742 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704778 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704786 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704794 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704802 | orchestrator | skipping: [testbed-node-0] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704810 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704818 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704826 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704834 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704842 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704850 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704858 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-05 12:33:27.704866 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-05 12:33:27.704873 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-05 12:33:27.704881 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704889 | orchestrator | 2025-04-05 12:33:27.704897 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-04-05 12:33:27.704905 | orchestrator | Saturday 05 April 2025 12:24:30 +0000 (0:00:00.880) 0:02:37.352 ******** 2025-04-05 12:33:27.704913 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704921 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.704929 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.704937 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.704944 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.704952 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.704960 | orchestrator | 2025-04-05 12:33:27.704968 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-04-05 12:33:27.704976 | orchestrator | Saturday 05 April 2025 12:24:31 +0000 (0:00:00.810) 0:02:38.163 ******** 2025-04-05 12:33:27.704984 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.704992 | orchestrator | 2025-04-05 12:33:27.705000 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-04-05 12:33:27.705008 | orchestrator | Saturday 05 April 2025 12:24:31 +0000 (0:00:00.180) 0:02:38.343 ******** 2025-04-05 12:33:27.705016 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705024 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705032 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705039 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705047 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705058 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705070 | orchestrator | 2025-04-05 12:33:27.705098 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-04-05 12:33:27.705113 | orchestrator | Saturday 05 April 2025 12:24:32 +0000 (0:00:00.949) 0:02:39.293 ******** 2025-04-05 12:33:27.705127 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705135 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705149 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705157 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705165 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705173 | orchestrator | skipping: [testbed-node-2] 
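[Editorial note, not part of the console output] The ceph-container-common tasks that follow read the Ceph version from the pulled `ceph-daemon:quincy` image and then walk a chain of "set_fact ceph_release ..." tasks, of which only the quincy branch matches here. A minimal Python sketch of that version-to-codename mapping is below; the mapping table reflects the public Ceph release numbering, while the sample version string `17.2.7` is a hypothetical example (the log only confirms that the release resolves to quincy), and this is not the actual release.yml logic.

```python
# Minimal sketch (assumption, not ceph-ansible's release.yml): derive the Ceph
# release codename from the major version reported by `ceph --version`.
CEPH_RELEASES = {
    10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
    14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy",
}


def ceph_release_from_version(version_stdout: str) -> str:
    """Map e.g. 'ceph version 17.2.7 (...) quincy (stable)' to 'quincy'."""
    major = int(version_stdout.split()[2].split(".")[0])
    return CEPH_RELEASES.get(major, "unknown")


# Hypothetical version string for illustration only.
print(ceph_release_from_version("ceph version 17.2.7 (abc123) quincy (stable)"))
# -> quincy
```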
2025-04-05 12:33:27.705181 | orchestrator | 2025-04-05 12:33:27.705189 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-04-05 12:33:27.705197 | orchestrator | Saturday 05 April 2025 12:24:33 +0000 (0:00:00.742) 0:02:40.035 ******** 2025-04-05 12:33:27.705205 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705217 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705225 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705233 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705241 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705248 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705256 | orchestrator | 2025-04-05 12:33:27.705264 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-04-05 12:33:27.705272 | orchestrator | Saturday 05 April 2025 12:24:34 +0000 (0:00:00.921) 0:02:40.957 ******** 2025-04-05 12:33:27.705280 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.705288 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.705296 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.705304 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.705312 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.705320 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.705328 | orchestrator | 2025-04-05 12:33:27.705337 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-04-05 12:33:27.705346 | orchestrator | Saturday 05 April 2025 12:24:36 +0000 (0:00:02.086) 0:02:43.043 ******** 2025-04-05 12:33:27.705355 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.705364 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.705373 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.705382 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.705391 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.705400 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.705409 | orchestrator | 2025-04-05 12:33:27.705418 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-04-05 12:33:27.705427 | orchestrator | Saturday 05 April 2025 12:24:37 +0000 (0:00:00.899) 0:02:43.943 ******** 2025-04-05 12:33:27.705490 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.705503 | orchestrator | 2025-04-05 12:33:27.705514 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-04-05 12:33:27.705524 | orchestrator | Saturday 05 April 2025 12:24:38 +0000 (0:00:01.252) 0:02:45.195 ******** 2025-04-05 12:33:27.705534 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705543 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705552 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705561 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705569 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705578 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705587 | orchestrator | 2025-04-05 12:33:27.705595 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-05 12:33:27.705604 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:00.717) 0:02:45.913 
******** 2025-04-05 12:33:27.705612 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705621 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705630 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705638 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705647 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705656 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705664 | orchestrator | 2025-04-05 12:33:27.705677 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-05 12:33:27.705686 | orchestrator | Saturday 05 April 2025 12:24:39 +0000 (0:00:00.760) 0:02:46.674 ******** 2025-04-05 12:33:27.705699 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705708 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705716 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705725 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705733 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705742 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705762 | orchestrator | 2025-04-05 12:33:27.705770 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-05 12:33:27.705778 | orchestrator | Saturday 05 April 2025 12:24:40 +0000 (0:00:00.680) 0:02:47.354 ******** 2025-04-05 12:33:27.705786 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705794 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705801 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705809 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705817 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705825 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705833 | orchestrator | 2025-04-05 12:33:27.705841 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-05 12:33:27.705848 | orchestrator | Saturday 05 April 2025 12:24:41 +0000 (0:00:00.882) 0:02:48.237 ******** 2025-04-05 12:33:27.705856 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705864 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705872 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705880 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705888 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705896 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705904 | orchestrator | 2025-04-05 12:33:27.705912 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-05 12:33:27.705920 | orchestrator | Saturday 05 April 2025 12:24:42 +0000 (0:00:00.602) 0:02:48.840 ******** 2025-04-05 12:33:27.705928 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.705936 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.705944 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.705952 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.705963 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.705971 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.705979 | orchestrator | 2025-04-05 12:33:27.705988 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-05 12:33:27.705995 | orchestrator | Saturday 05 April 2025 12:24:42 +0000 (0:00:00.682) 
0:02:49.522 ******** 2025-04-05 12:33:27.706004 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.706012 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.706043 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.706051 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.706059 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.706067 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.706075 | orchestrator | 2025-04-05 12:33:27.706083 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-05 12:33:27.706091 | orchestrator | Saturday 05 April 2025 12:24:43 +0000 (0:00:00.731) 0:02:50.254 ******** 2025-04-05 12:33:27.706099 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.706107 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.706115 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.706123 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.706131 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.706139 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.706147 | orchestrator | 2025-04-05 12:33:27.706155 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.706163 | orchestrator | Saturday 05 April 2025 12:24:44 +0000 (0:00:01.258) 0:02:51.512 ******** 2025-04-05 12:33:27.706171 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.706184 | orchestrator | 2025-04-05 12:33:27.706192 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-05 12:33:27.706200 | orchestrator | Saturday 05 April 2025 12:24:46 +0000 (0:00:01.251) 0:02:52.764 ******** 2025-04-05 12:33:27.706208 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-04-05 12:33:27.706216 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-05 12:33:27.706224 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-05 12:33:27.706232 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706240 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-05 12:33:27.706248 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706301 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-04-05 12:33:27.706314 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-05 12:33:27.706323 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706332 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706349 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706358 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-05 12:33:27.706375 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706383 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706410 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706419 | orchestrator | 
changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-05 12:33:27.706445 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706454 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706462 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706471 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706479 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706488 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-05 12:33:27.706496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706504 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706513 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706536 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706544 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-05 12:33:27.706553 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706570 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706578 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706595 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706603 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-05 12:33:27.706612 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706634 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706642 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706650 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-05 12:33:27.706667 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706676 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706684 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706719 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706728 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706735 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-05 12:33:27.706779 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706789 | orchestrator | 
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706798 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706814 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-05 12:33:27.706830 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.706838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706846 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.706854 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.706862 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706870 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-05 12:33:27.706878 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.706885 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.706893 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.706949 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.706960 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.706968 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.706975 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.706982 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.706989 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.706997 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-05 12:33:27.707004 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.707011 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.707018 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-05 12:33:27.707026 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-04-05 12:33:27.707033 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-05 12:33:27.707041 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.707048 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-05 12:33:27.707061 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-05 12:33:27.707068 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-05 12:33:27.707076 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-05 12:33:27.707083 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-05 12:33:27.707090 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-05 12:33:27.707098 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-05 12:33:27.707105 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-05 12:33:27.707113 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-05 12:33:27.707120 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-05 12:33:27.707127 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-05 12:33:27.707135 | orchestrator | 2025-04-05 12:33:27.707142 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.707149 | orchestrator | Saturday 05 April 2025 12:24:52 +0000 (0:00:06.784) 0:02:59.549 ******** 2025-04-05 12:33:27.707157 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707164 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707172 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707179 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.707187 | orchestrator | 2025-04-05 12:33:27.707195 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-05 12:33:27.707202 | orchestrator | Saturday 05 April 2025 12:24:53 +0000 (0:00:01.011) 0:03:00.560 ******** 2025-04-05 12:33:27.707210 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707217 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707225 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707232 | orchestrator | 2025-04-05 12:33:27.707240 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-05 12:33:27.707247 | orchestrator | Saturday 05 April 2025 12:24:54 +0000 (0:00:00.707) 0:03:01.268 ******** 2025-04-05 12:33:27.707254 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707262 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707269 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.707277 | orchestrator | 2025-04-05 12:33:27.707284 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.707291 | orchestrator | Saturday 05 April 2025 12:24:55 +0000 (0:00:01.199) 0:03:02.467 ******** 2025-04-05 12:33:27.707298 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.707306 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.707313 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.707321 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707328 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707335 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707342 | orchestrator | 2025-04-05 12:33:27.707350 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] 
********************* 2025-04-05 12:33:27.707360 | orchestrator | Saturday 05 April 2025 12:24:56 +0000 (0:00:00.768) 0:03:03.235 ******** 2025-04-05 12:33:27.707367 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.707375 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.707382 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.707394 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707401 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707408 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707416 | orchestrator | 2025-04-05 12:33:27.707423 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.707467 | orchestrator | Saturday 05 April 2025 12:24:57 +0000 (0:00:00.692) 0:03:03.928 ******** 2025-04-05 12:33:27.707477 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707485 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707493 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707500 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707508 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707515 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707523 | orchestrator | 2025-04-05 12:33:27.707531 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.707538 | orchestrator | Saturday 05 April 2025 12:24:57 +0000 (0:00:00.685) 0:03:04.614 ******** 2025-04-05 12:33:27.707546 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707553 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707561 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707568 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707576 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707583 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707591 | orchestrator | 2025-04-05 12:33:27.707598 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.707606 | orchestrator | Saturday 05 April 2025 12:24:58 +0000 (0:00:00.529) 0:03:05.144 ******** 2025-04-05 12:33:27.707613 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707621 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707628 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707636 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707643 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707651 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707658 | orchestrator | 2025-04-05 12:33:27.707666 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.707674 | orchestrator | Saturday 05 April 2025 12:24:59 +0000 (0:00:00.724) 0:03:05.868 ******** 2025-04-05 12:33:27.707681 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707689 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707696 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707704 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707715 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707722 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707730 | orchestrator | 2025-04-05 12:33:27.707738 | orchestrator | TASK [ceph-config : set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.707757 | orchestrator | Saturday 05 April 2025 12:24:59 +0000 (0:00:00.559) 0:03:06.428 ******** 2025-04-05 12:33:27.707765 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707772 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707779 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707786 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707793 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707811 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707818 | orchestrator | 2025-04-05 12:33:27.707825 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.707832 | orchestrator | Saturday 05 April 2025 12:25:00 +0000 (0:00:00.775) 0:03:07.203 ******** 2025-04-05 12:33:27.707843 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.707850 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.707857 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.707864 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707871 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707882 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707889 | orchestrator | 2025-04-05 12:33:27.707896 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.707903 | orchestrator | Saturday 05 April 2025 12:25:01 +0000 (0:00:00.540) 0:03:07.743 ******** 2025-04-05 12:33:27.707910 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.707917 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.707924 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.707931 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.707938 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.707945 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.707952 | orchestrator | 2025-04-05 12:33:27.707959 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.707967 | orchestrator | Saturday 05 April 2025 12:25:02 +0000 (0:00:01.457) 0:03:09.200 ******** 2025-04-05 12:33:27.707974 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.707980 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.707987 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.707994 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708001 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708008 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708015 | orchestrator | 2025-04-05 12:33:27.708022 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.708029 | orchestrator | Saturday 05 April 2025 12:25:03 +0000 (0:00:00.565) 0:03:09.766 ******** 2025-04-05 12:33:27.708036 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.708043 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.708049 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708056 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.708063 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.708070 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.708077 | 
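The num_osds bookkeeping above combines two probes: a (here skipped) `ceph-volume lvm batch --report` dry run for OSDs still to be created, and a `ceph-volume lvm list` run for OSDs that already exist on testbed-node-3/4/5. The following is a minimal sketch of that counting logic, assuming the JSON output shapes of both commands; the role's actual Jinja expressions are not shown in this log.

    # Illustrative only: approximates how num_osds is derived for the lvm
    # scenario by combining a `ceph-volume lvm batch --report` dry run with
    # the OSDs that `ceph-volume lvm list` already reports on the host.
    # The legacy-vs-new report keys are assumptions, not taken from this log.
    import json
    import subprocess

    def existing_osd_count() -> int:
        """Count OSDs already created on this host via `ceph-volume lvm list`."""
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        # The JSON output maps OSD id -> list of logical volumes for that OSD.
        return len(json.loads(out) or {})

    def planned_osd_count(batch_report_json: str) -> int:
        """Count OSDs a `ceph-volume lvm batch --report --format json` run would create."""
        report = json.loads(batch_report_json)
        if isinstance(report, list):           # "new" report style: one entry per OSD
            return len(report)
        return len(report.get("osds", []))     # "legacy" report style

    # num_osds = planned_osd_count(report_stdout) + existing_osd_count()
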
orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.708084 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.708091 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.708098 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.708105 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.708113 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708120 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.708128 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.708136 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708144 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.708151 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.708160 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708168 | orchestrator | 2025-04-05 12:33:27.708216 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.708227 | orchestrator | Saturday 05 April 2025 12:25:03 +0000 (0:00:00.772) 0:03:10.538 ******** 2025-04-05 12:33:27.708235 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-04-05 12:33:27.708243 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-05 12:33:27.708250 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-05 12:33:27.708261 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-05 12:33:27.708268 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-05 12:33:27.708275 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-05 12:33:27.708282 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-05 12:33:27.708289 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-05 12:33:27.708296 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708303 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-05 12:33:27.708314 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-05 12:33:27.708321 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708328 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-05 12:33:27.708335 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-05 12:33:27.708342 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708349 | orchestrator | 2025-04-05 12:33:27.708356 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.708363 | orchestrator | Saturday 05 April 2025 12:25:04 +0000 (0:00:00.586) 0:03:11.124 ******** 2025-04-05 12:33:27.708370 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.708377 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.708385 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.708391 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708398 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708405 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708412 | orchestrator | 2025-04-05 12:33:27.708419 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.708426 | orchestrator | Saturday 05 April 2025 12:25:05 +0000 (0:00:00.729) 0:03:11.854 
******** 2025-04-05 12:33:27.708433 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708440 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.708447 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.708454 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708461 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708468 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708474 | orchestrator | 2025-04-05 12:33:27.708482 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.708489 | orchestrator | Saturday 05 April 2025 12:25:05 +0000 (0:00:00.543) 0:03:12.398 ******** 2025-04-05 12:33:27.708496 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708502 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.708509 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.708516 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708523 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708534 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708542 | orchestrator | 2025-04-05 12:33:27.708549 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.708556 | orchestrator | Saturday 05 April 2025 12:25:06 +0000 (0:00:00.694) 0:03:13.093 ******** 2025-04-05 12:33:27.708563 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708570 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.708577 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.708584 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708591 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708598 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708605 | orchestrator | 2025-04-05 12:33:27.708612 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.708619 | orchestrator | Saturday 05 April 2025 12:25:07 +0000 (0:00:00.707) 0:03:13.800 ******** 2025-04-05 12:33:27.708626 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708633 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.708640 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.708647 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708653 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708660 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708667 | orchestrator | 2025-04-05 12:33:27.708674 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.708681 | orchestrator | Saturday 05 April 2025 12:25:07 +0000 (0:00:00.786) 0:03:14.586 ******** 2025-04-05 12:33:27.708688 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.708695 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.708706 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708713 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.708720 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708727 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.708734 | orchestrator | 2025-04-05 12:33:27.708741 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.708760 | orchestrator | Saturday 05 April 
2025 12:25:08 +0000 (0:00:00.756) 0:03:15.343 ******** 2025-04-05 12:33:27.708768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.708775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.708782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.708788 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708796 | orchestrator | 2025-04-05 12:33:27.708803 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.708810 | orchestrator | Saturday 05 April 2025 12:25:09 +0000 (0:00:00.374) 0:03:15.718 ******** 2025-04-05 12:33:27.708816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.708827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.708872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.708882 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708889 | orchestrator | 2025-04-05 12:33:27.708897 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.708904 | orchestrator | Saturday 05 April 2025 12:25:09 +0000 (0:00:00.389) 0:03:16.107 ******** 2025-04-05 12:33:27.708911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.708918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.708925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.708932 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.708939 | orchestrator | 2025-04-05 12:33:27.708946 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.708953 | orchestrator | Saturday 05 April 2025 12:25:09 +0000 (0:00:00.525) 0:03:16.633 ******** 2025-04-05 12:33:27.708960 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.708967 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.708974 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.708981 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.708988 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.708994 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709001 | orchestrator | 2025-04-05 12:33:27.709008 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.709016 | orchestrator | Saturday 05 April 2025 12:25:10 +0000 (0:00:00.826) 0:03:17.460 ******** 2025-04-05 12:33:27.709023 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-05 12:33:27.709030 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-05 12:33:27.709036 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.709044 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-05 12:33:27.709050 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709057 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.709064 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709071 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.709078 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709085 | orchestrator | 2025-04-05 12:33:27.709092 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] 
************************** 2025-04-05 12:33:27.709099 | orchestrator | Saturday 05 April 2025 12:25:11 +0000 (0:00:00.913) 0:03:18.374 ******** 2025-04-05 12:33:27.709106 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709114 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709121 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709132 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709139 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709146 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709153 | orchestrator | 2025-04-05 12:33:27.709160 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.709167 | orchestrator | Saturday 05 April 2025 12:25:12 +0000 (0:00:01.005) 0:03:19.379 ******** 2025-04-05 12:33:27.709174 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709181 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709188 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709195 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709202 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709209 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709216 | orchestrator | 2025-04-05 12:33:27.709223 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.709230 | orchestrator | Saturday 05 April 2025 12:25:13 +0000 (0:00:00.667) 0:03:20.047 ******** 2025-04-05 12:33:27.709237 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.709244 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.709251 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709258 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.709265 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709272 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.709288 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709295 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709308 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.709315 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709322 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.709329 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709336 | orchestrator | 2025-04-05 12:33:27.709343 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.709350 | orchestrator | Saturday 05 April 2025 12:25:15 +0000 (0:00:01.697) 0:03:21.744 ******** 2025-04-05 12:33:27.709357 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.709364 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709371 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.709378 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709385 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.709392 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709398 | 
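The rgw_instances fact set above (the non-multisite path, which is the one taken in this run) resolves, per RGW node, to the same instance dicts echoed earlier by the "create rados gateway instance directories" task. A rough sketch of that shape follows; the per-instance port offset is an assumption about the role's behaviour, not something visible in this log.

    # Rough sketch of the per-host rgw_instances fact, based on the values
    # echoed in this run (one instance per node, port 8081). The
    # "base_port + index" offset is an assumption for the multi-instance case.
    def build_rgw_instances(radosgw_address: str,
                            radosgw_frontend_port: int = 8081,
                            instance_count: int = 1) -> list[dict]:
        return [
            {
                "instance_name": f"rgw{i}",
                "radosgw_address": radosgw_address,
                "radosgw_frontend_port": radosgw_frontend_port + i,
            }
            for i in range(instance_count)
        ]

    # testbed-node-3 in this run:
    # build_rgw_instances("192.168.16.13") ->
    #   [{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13',
    #     'radosgw_frontend_port': 8081}]

With one instance per node, as here, the result is simply rgw0 bound to each node's 192.168.16.x address on port 8081.
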
orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709405 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709412 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709419 | orchestrator | 2025-04-05 12:33:27.709426 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.709433 | orchestrator | Saturday 05 April 2025 12:25:15 +0000 (0:00:00.585) 0:03:22.330 ******** 2025-04-05 12:33:27.709440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.709447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.709454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.709497 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.709507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.709514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.709521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.709528 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.709541 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.709548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.709555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.709562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.709569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 12:33:27.709576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:33:27.709583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:33:27.709590 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709597 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709604 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.709611 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.709618 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:33:27.709636 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:33:27.709643 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:33:27.709650 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.709657 | orchestrator | 2025-04-05 12:33:27.709664 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.709671 | orchestrator | Saturday 05 April 2025 12:25:17 +0000 (0:00:01.717) 0:03:24.047 ******** 2025-04-05 12:33:27.709678 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.709685 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.709692 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.709699 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.709706 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.709713 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.709720 | orchestrator | 2025-04-05 12:33:27.709727 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.709734 | 
orchestrator | Saturday 05 April 2025 12:25:21 +0000 (0:00:04.239) 0:03:28.287 ******** 2025-04-05 12:33:27.709741 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.709758 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.709766 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.709773 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.709780 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.709787 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.709794 | orchestrator | 2025-04-05 12:33:27.709801 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-05 12:33:27.709808 | orchestrator | Saturday 05 April 2025 12:25:22 +0000 (0:00:00.906) 0:03:29.194 ******** 2025-04-05 12:33:27.709815 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709822 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709829 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.709836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.709843 | orchestrator | 2025-04-05 12:33:27.709850 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-05 12:33:27.709857 | orchestrator | Saturday 05 April 2025 12:25:23 +0000 (0:00:00.862) 0:03:30.056 ******** 2025-04-05 12:33:27.709864 | orchestrator | 2025-04-05 12:33:27.709871 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-04-05 12:33:27.709878 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.709885 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.709892 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.709900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.709907 | orchestrator | 2025-04-05 12:33:27.709914 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-04-05 12:33:27.709925 | orchestrator | Saturday 05 April 2025 12:25:24 +0000 (0:00:00.955) 0:03:31.011 ******** 2025-04-05 12:33:27.709932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.709939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.709946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.709953 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709960 | orchestrator | 2025-04-05 12:33:27.709967 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-04-05 12:33:27.709974 | orchestrator | Saturday 05 April 2025 12:25:24 +0000 (0:00:00.399) 0:03:31.411 ******** 2025-04-05 12:33:27.709981 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.709988 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.709995 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.710002 | orchestrator | 2025-04-05 12:33:27.710009 | orchestrator | TASK [ceph-handler : set _osd_handler_called before restart] ******************* 2025-04-05 12:33:27.710033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.710041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.710048 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-2)  2025-04-05 12:33:27.710055 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710062 | orchestrator | 2025-04-05 12:33:27.710069 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-04-05 12:33:27.710076 | orchestrator | Saturday 05 April 2025 12:25:25 +0000 (0:00:00.874) 0:03:32.286 ******** 2025-04-05 12:33:27.710083 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710090 | orchestrator | 2025-04-05 12:33:27.710100 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-04-05 12:33:27.710145 | orchestrator | Saturday 05 April 2025 12:25:25 +0000 (0:00:00.254) 0:03:32.541 ******** 2025-04-05 12:33:27.710155 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710162 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.710169 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.710176 | orchestrator | 2025-04-05 12:33:27.710183 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-04-05 12:33:27.710190 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710197 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.710204 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.710215 | orchestrator | 2025-04-05 12:33:27.710222 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-04-05 12:33:27.710229 | orchestrator | Saturday 05 April 2025 12:25:26 +0000 (0:00:00.944) 0:03:33.485 ******** 2025-04-05 12:33:27.710236 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710243 | orchestrator | 2025-04-05 12:33:27.710250 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-04-05 12:33:27.710257 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.221) 0:03:33.707 ******** 2025-04-05 12:33:27.710264 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710271 | orchestrator | 2025-04-05 12:33:27.710278 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-05 12:33:27.710285 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.219) 0:03:33.926 ******** 2025-04-05 12:33:27.710292 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710299 | orchestrator | 2025-04-05 12:33:27.710306 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-04-05 12:33:27.710313 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.110) 0:03:34.037 ******** 2025-04-05 12:33:27.710320 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710327 | orchestrator | 2025-04-05 12:33:27.710334 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-04-05 12:33:27.710341 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.231) 0:03:34.268 ******** 2025-04-05 12:33:27.710348 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710360 | orchestrator | 2025-04-05 12:33:27.710367 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-04-05 12:33:27.710374 | orchestrator | Saturday 05 April 2025 12:25:27 +0000 (0:00:00.221) 0:03:34.490 ******** 2025-04-05 12:33:27.710381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.710388 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.710395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.710402 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710409 | orchestrator | 2025-04-05 12:33:27.710416 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-04-05 12:33:27.710423 | orchestrator | Saturday 05 April 2025 12:25:28 +0000 (0:00:00.410) 0:03:34.900 ******** 2025-04-05 12:33:27.710430 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710440 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.710447 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.710454 | orchestrator | 2025-04-05 12:33:27.710462 | orchestrator | TASK [ceph-handler : set _osd_handler_called after restart] ******************** 2025-04-05 12:33:27.710469 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710486 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.710493 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.710500 | orchestrator | 2025-04-05 12:33:27.710507 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-04-05 12:33:27.710514 | orchestrator | Saturday 05 April 2025 12:25:29 +0000 (0:00:00.988) 0:03:35.888 ******** 2025-04-05 12:33:27.710521 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710528 | orchestrator | 2025-04-05 12:33:27.710535 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-04-05 12:33:27.710542 | orchestrator | Saturday 05 April 2025 12:25:29 +0000 (0:00:00.265) 0:03:36.153 ******** 2025-04-05 12:33:27.710549 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.710556 | orchestrator | 2025-04-05 12:33:27.710563 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-05 12:33:27.710570 | orchestrator | Saturday 05 April 2025 12:25:29 +0000 (0:00:00.212) 0:03:36.366 ******** 2025-04-05 12:33:27.710578 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.710585 | orchestrator | 2025-04-05 12:33:27.710592 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-04-05 12:33:27.710599 | orchestrator | Saturday 05 April 2025 12:25:30 +0000 (0:00:00.883) 0:03:37.250 ******** 2025-04-05 12:33:27.710606 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.710613 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.710620 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.710626 | orchestrator | 2025-04-05 12:33:27.710633 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-05 12:33:27.710640 | orchestrator | Saturday 05 April 2025 12:25:31 +0000 (0:00:00.988) 0:03:38.238 ******** 2025-04-05 12:33:27.710647 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710654 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.710661 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.710668 | orchestrator | 2025-04-05 12:33:27.710675 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-05 12:33:27.710682 | orchestrator | Saturday 05 April 2025 12:25:32 +0000 (0:00:00.486) 0:03:38.725 ******** 2025-04-05 12:33:27.710689 | 
orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710696 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.710703 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.710710 | orchestrator | 2025-04-05 12:33:27.710717 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-05 12:33:27.710724 | orchestrator | Saturday 05 April 2025 12:25:32 +0000 (0:00:00.668) 0:03:39.394 ******** 2025-04-05 12:33:27.710730 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.710741 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.710780 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.710789 | orchestrator | 2025-04-05 12:33:27.710800 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-05 12:33:27.710850 | orchestrator | Saturday 05 April 2025 12:25:33 +0000 (0:00:00.597) 0:03:39.991 ******** 2025-04-05 12:33:27.710861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.710870 | orchestrator | 2025-04-05 12:33:27.710878 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-05 12:33:27.710887 | orchestrator | Saturday 05 April 2025 12:25:34 +0000 (0:00:00.820) 0:03:40.812 ******** 2025-04-05 12:33:27.710895 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.710904 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.710912 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.710921 | orchestrator | 2025-04-05 12:33:27.710929 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-05 12:33:27.710937 | orchestrator | Saturday 05 April 2025 12:25:34 +0000 (0:00:00.555) 0:03:41.367 ******** 2025-04-05 12:33:27.710945 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.710954 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.710962 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.710970 | orchestrator | 2025-04-05 12:33:27.710978 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-05 12:33:27.710986 | orchestrator | Saturday 05 April 2025 12:25:35 +0000 (0:00:01.225) 0:03:42.593 ******** 2025-04-05 12:33:27.710994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.711003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.711011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:33:27.711019 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711028 | orchestrator | 2025-04-05 12:33:27.711036 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-05 12:33:27.711044 | orchestrator | Saturday 05 April 2025 12:25:36 +0000 (0:00:00.862) 0:03:43.456 ******** 2025-04-05 12:33:27.711052 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.711060 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.711069 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.711077 | orchestrator | 2025-04-05 12:33:27.711085 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.711093 | orchestrator | Saturday 05 April 2025 12:25:37 +0000 (0:00:00.753) 0:03:44.209 ******** 2025-04-05 12:33:27.711102 | 
orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711110 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.711118 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.711126 | orchestrator | 2025-04-05 12:33:27.711134 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-05 12:33:27.711143 | orchestrator | Saturday 05 April 2025 12:25:37 +0000 (0:00:00.394) 0:03:44.603 ******** 2025-04-05 12:33:27.711151 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.711158 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.711165 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.711173 | orchestrator | 2025-04-05 12:33:27.711181 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-04-05 12:33:27.711188 | orchestrator | Saturday 05 April 2025 12:25:39 +0000 (0:00:01.225) 0:03:45.829 ******** 2025-04-05 12:33:27.711196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.711203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.711211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.711218 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.711226 | orchestrator | 2025-04-05 12:33:27.711233 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-05 12:33:27.711246 | orchestrator | Saturday 05 April 2025 12:25:39 +0000 (0:00:00.713) 0:03:46.542 ******** 2025-04-05 12:33:27.711254 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.711261 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.711269 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.711276 | orchestrator | 2025-04-05 12:33:27.711284 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-05 12:33:27.711291 | orchestrator | Saturday 05 April 2025 12:25:40 +0000 (0:00:00.422) 0:03:46.965 ******** 2025-04-05 12:33:27.711299 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.711306 | orchestrator | 2025-04-05 12:33:27.711313 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-05 12:33:27.711319 | orchestrator | Saturday 05 April 2025 12:25:40 +0000 (0:00:00.535) 0:03:47.501 ******** 2025-04-05 12:33:27.711326 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.711333 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.711339 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.711346 | orchestrator | 2025-04-05 12:33:27.711353 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-05 12:33:27.711359 | orchestrator | Saturday 05 April 2025 12:25:41 +0000 (0:00:00.682) 0:03:48.184 ******** 2025-04-05 12:33:27.711366 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.711372 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.711379 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.711386 | orchestrator | 2025-04-05 12:33:27.711392 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-05 12:33:27.711399 | orchestrator | Saturday 05 April 2025 12:25:42 +0000 (0:00:01.465) 0:03:49.650 ******** 2025-04-05 12:33:27.711408 | 
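Each RUNNING HANDLER block above follows the same shape per daemon type: make a tempdir, copy a restart script, conditionally run it, then record that the handler was called. In this run every "restart ceph ... daemon(s)" task reports skipping, so the regenerated ceph.conf is staged without bouncing any daemon. Below is a deliberately simplified model of that gating, an illustration of the pattern rather than ceph-ansible's actual conditions.

    # Simplified model of the restart gating seen in the handler blocks above.
    # The real role evaluates per-daemon status facts and group membership;
    # this only illustrates the "copy script, restart only when needed" idea.
    def should_restart(config_changed: bool, daemon_present: bool,
                       handler_already_called: bool) -> bool:
        return config_changed and daemon_present and not handler_already_called

    # In this run every "restart ceph ... daemon(s)" task was skipped, i.e.
    # the gating evaluated falsy for each host at this point in the play.
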
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.711418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.711428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.711438 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.711448 | orchestrator | 2025-04-05 12:33:27.711457 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-05 12:33:27.711468 | orchestrator | Saturday 05 April 2025 12:25:43 +0000 (0:00:00.510) 0:03:50.161 ******** 2025-04-05 12:33:27.711476 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.711482 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.711488 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.711494 | orchestrator | 2025-04-05 12:33:27.711539 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-05 12:33:27.711549 | orchestrator | Saturday 05 April 2025 12:25:43 +0000 (0:00:00.412) 0:03:50.574 ******** 2025-04-05 12:33:27.711555 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.711561 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.711571 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.711577 | orchestrator | 2025-04-05 12:33:27.711584 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-05 12:33:27.711593 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:00.322) 0:03:50.896 ******** 2025-04-05 12:33:27.711599 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.711605 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.711612 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.711618 | orchestrator | 2025-04-05 12:33:27.711624 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-04-05 12:33:27.711630 | orchestrator | Saturday 05 April 2025 12:25:44 +0000 (0:00:00.567) 0:03:51.464 ******** 2025-04-05 12:33:27.711636 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.711643 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.711649 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.711655 | orchestrator | 2025-04-05 12:33:27.711661 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.711672 | orchestrator | Saturday 05 April 2025 12:25:45 +0000 (0:00:00.357) 0:03:51.821 ******** 2025-04-05 12:33:27.711678 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.711684 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.711690 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.711697 | orchestrator | 2025-04-05 12:33:27.711703 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-04-05 12:33:27.711709 | orchestrator | 2025-04-05 12:33:27.711715 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.711721 | orchestrator | Saturday 05 April 2025 12:25:47 +0000 (0:00:02.440) 0:03:54.262 ******** 2025-04-05 12:33:27.711728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.711734 | orchestrator | 2025-04-05 12:33:27.711740 | orchestrator | TASK [ceph-handler : check for a mon 
container] ******************************** 2025-04-05 12:33:27.711760 | orchestrator | Saturday 05 April 2025 12:25:48 +0000 (0:00:00.666) 0:03:54.928 ******** 2025-04-05 12:33:27.711766 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.711773 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.711779 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.711785 | orchestrator | 2025-04-05 12:33:27.711792 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.711798 | orchestrator | Saturday 05 April 2025 12:25:49 +0000 (0:00:00.827) 0:03:55.756 ******** 2025-04-05 12:33:27.711804 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711810 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.711816 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.711823 | orchestrator | 2025-04-05 12:33:27.711829 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.711835 | orchestrator | Saturday 05 April 2025 12:25:49 +0000 (0:00:00.452) 0:03:56.208 ******** 2025-04-05 12:33:27.711841 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711847 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.711854 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.711860 | orchestrator | 2025-04-05 12:33:27.711866 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.711883 | orchestrator | Saturday 05 April 2025 12:25:50 +0000 (0:00:00.620) 0:03:56.829 ******** 2025-04-05 12:33:27.711890 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711896 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.711903 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.711909 | orchestrator | 2025-04-05 12:33:27.711916 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.711922 | orchestrator | Saturday 05 April 2025 12:25:50 +0000 (0:00:00.341) 0:03:57.171 ******** 2025-04-05 12:33:27.711929 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.711935 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.711942 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.711948 | orchestrator | 2025-04-05 12:33:27.711955 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.711961 | orchestrator | Saturday 05 April 2025 12:25:51 +0000 (0:00:00.678) 0:03:57.849 ******** 2025-04-05 12:33:27.711968 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.711974 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.711981 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.711987 | orchestrator | 2025-04-05 12:33:27.711993 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.712000 | orchestrator | Saturday 05 April 2025 12:25:51 +0000 (0:00:00.298) 0:03:58.148 ******** 2025-04-05 12:33:27.712006 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712013 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712019 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712025 | orchestrator | 2025-04-05 12:33:27.712032 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.712042 | orchestrator | Saturday 
05 April 2025 12:25:51 +0000 (0:00:00.465) 0:03:58.613 ******** 2025-04-05 12:33:27.712049 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712055 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712061 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712068 | orchestrator | 2025-04-05 12:33:27.712074 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.712081 | orchestrator | Saturday 05 April 2025 12:25:52 +0000 (0:00:00.273) 0:03:58.886 ******** 2025-04-05 12:33:27.712087 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712094 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712100 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712107 | orchestrator | 2025-04-05 12:33:27.712113 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.712120 | orchestrator | Saturday 05 April 2025 12:25:52 +0000 (0:00:00.292) 0:03:59.178 ******** 2025-04-05 12:33:27.712126 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712167 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712176 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712182 | orchestrator | 2025-04-05 12:33:27.712188 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.712195 | orchestrator | Saturday 05 April 2025 12:25:52 +0000 (0:00:00.267) 0:03:59.446 ******** 2025-04-05 12:33:27.712201 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.712207 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.712213 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.712219 | orchestrator | 2025-04-05 12:33:27.712226 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.712232 | orchestrator | Saturday 05 April 2025 12:25:53 +0000 (0:00:00.783) 0:04:00.229 ******** 2025-04-05 12:33:27.712238 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712244 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712251 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712257 | orchestrator | 2025-04-05 12:33:27.712269 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.712275 | orchestrator | Saturday 05 April 2025 12:25:53 +0000 (0:00:00.270) 0:04:00.500 ******** 2025-04-05 12:33:27.712281 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.712288 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.712294 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.712300 | orchestrator | 2025-04-05 12:33:27.712306 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.712312 | orchestrator | Saturday 05 April 2025 12:25:54 +0000 (0:00:00.321) 0:04:00.822 ******** 2025-04-05 12:33:27.712319 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712325 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712331 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712341 | orchestrator | 2025-04-05 12:33:27.712347 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.712354 | orchestrator | Saturday 05 April 2025 12:25:54 +0000 (0:00:00.289) 0:04:01.112 ******** 2025-04-05 12:33:27.712360 | 
orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712367 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712373 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712379 | orchestrator | 2025-04-05 12:33:27.712385 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.712392 | orchestrator | Saturday 05 April 2025 12:25:54 +0000 (0:00:00.418) 0:04:01.530 ******** 2025-04-05 12:33:27.712398 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712404 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712410 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712416 | orchestrator | 2025-04-05 12:33:27.712423 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.712429 | orchestrator | Saturday 05 April 2025 12:25:55 +0000 (0:00:00.274) 0:04:01.805 ******** 2025-04-05 12:33:27.712439 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712445 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712451 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712458 | orchestrator | 2025-04-05 12:33:27.712464 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.712470 | orchestrator | Saturday 05 April 2025 12:25:55 +0000 (0:00:00.256) 0:04:02.061 ******** 2025-04-05 12:33:27.712476 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712483 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712489 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712495 | orchestrator | 2025-04-05 12:33:27.712501 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.712507 | orchestrator | Saturday 05 April 2025 12:25:55 +0000 (0:00:00.237) 0:04:02.298 ******** 2025-04-05 12:33:27.712514 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.712520 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.712526 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.712532 | orchestrator | 2025-04-05 12:33:27.712538 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.712545 | orchestrator | Saturday 05 April 2025 12:25:56 +0000 (0:00:00.408) 0:04:02.707 ******** 2025-04-05 12:33:27.712551 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.712557 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.712563 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.712569 | orchestrator | 2025-04-05 12:33:27.712576 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.712582 | orchestrator | Saturday 05 April 2025 12:25:56 +0000 (0:00:00.277) 0:04:02.984 ******** 2025-04-05 12:33:27.712588 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712594 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712600 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712606 | orchestrator | 2025-04-05 12:33:27.712613 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.712619 | orchestrator | Saturday 05 April 2025 12:25:56 +0000 (0:00:00.299) 0:04:03.284 ******** 2025-04-05 12:33:27.712625 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712631 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:33:27.712637 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712643 | orchestrator | 2025-04-05 12:33:27.712650 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.712656 | orchestrator | Saturday 05 April 2025 12:25:56 +0000 (0:00:00.293) 0:04:03.578 ******** 2025-04-05 12:33:27.712662 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712668 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712674 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712680 | orchestrator | 2025-04-05 12:33:27.712686 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.712692 | orchestrator | Saturday 05 April 2025 12:25:57 +0000 (0:00:00.392) 0:04:03.970 ******** 2025-04-05 12:33:27.712698 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712704 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712711 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712717 | orchestrator | 2025-04-05 12:33:27.712723 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.712729 | orchestrator | Saturday 05 April 2025 12:25:57 +0000 (0:00:00.266) 0:04:04.236 ******** 2025-04-05 12:33:27.712735 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712786 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712795 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712802 | orchestrator | 2025-04-05 12:33:27.712808 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.712815 | orchestrator | Saturday 05 April 2025 12:25:57 +0000 (0:00:00.297) 0:04:04.534 ******** 2025-04-05 12:33:27.712821 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712831 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712837 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712844 | orchestrator | 2025-04-05 12:33:27.712850 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.712856 | orchestrator | Saturday 05 April 2025 12:25:58 +0000 (0:00:00.289) 0:04:04.823 ******** 2025-04-05 12:33:27.712862 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712868 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712874 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712880 | orchestrator | 2025-04-05 12:33:27.712887 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.712893 | orchestrator | Saturday 05 April 2025 12:25:58 +0000 (0:00:00.439) 0:04:05.263 ******** 2025-04-05 12:33:27.712899 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712905 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712911 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712917 | orchestrator | 2025-04-05 12:33:27.712924 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.712930 | orchestrator | Saturday 05 April 2025 12:25:58 +0000 (0:00:00.304) 0:04:05.567 ******** 2025-04-05 12:33:27.712936 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712942 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:33:27.712949 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712955 | orchestrator | 2025-04-05 12:33:27.712961 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.712970 | orchestrator | Saturday 05 April 2025 12:25:59 +0000 (0:00:00.332) 0:04:05.899 ******** 2025-04-05 12:33:27.712977 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.712983 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.712989 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.712995 | orchestrator | 2025-04-05 12:33:27.713001 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.713008 | orchestrator | Saturday 05 April 2025 12:25:59 +0000 (0:00:00.294) 0:04:06.194 ******** 2025-04-05 12:33:27.713014 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713020 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713029 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713035 | orchestrator | 2025-04-05 12:33:27.713042 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.713048 | orchestrator | Saturday 05 April 2025 12:25:59 +0000 (0:00:00.478) 0:04:06.673 ******** 2025-04-05 12:33:27.713054 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713060 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713066 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713072 | orchestrator | 2025-04-05 12:33:27.713078 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.713085 | orchestrator | Saturday 05 April 2025 12:26:00 +0000 (0:00:00.326) 0:04:06.999 ******** 2025-04-05 12:33:27.713091 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.713097 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.713103 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713110 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.713116 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.713122 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713128 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.713135 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.713141 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713147 | orchestrator | 2025-04-05 12:33:27.713153 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.713159 | orchestrator | Saturday 05 April 2025 12:26:00 +0000 (0:00:00.400) 0:04:07.400 ******** 2025-04-05 12:33:27.713170 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-05 12:33:27.713176 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-05 12:33:27.713182 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713188 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-05 12:33:27.713195 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-05 12:33:27.713201 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713207 | orchestrator | skipping: [testbed-node-2] => (item=osd memory 
target)  2025-04-05 12:33:27.713224 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-05 12:33:27.713231 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713237 | orchestrator | 2025-04-05 12:33:27.713244 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.713251 | orchestrator | Saturday 05 April 2025 12:26:01 +0000 (0:00:00.318) 0:04:07.719 ******** 2025-04-05 12:33:27.713257 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713264 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713271 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713277 | orchestrator | 2025-04-05 12:33:27.713284 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.713291 | orchestrator | Saturday 05 April 2025 12:26:01 +0000 (0:00:00.486) 0:04:08.206 ******** 2025-04-05 12:33:27.713297 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713304 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713310 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713317 | orchestrator | 2025-04-05 12:33:27.713324 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.713364 | orchestrator | Saturday 05 April 2025 12:26:01 +0000 (0:00:00.274) 0:04:08.481 ******** 2025-04-05 12:33:27.713373 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713380 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713387 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713393 | orchestrator | 2025-04-05 12:33:27.713400 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.713407 | orchestrator | Saturday 05 April 2025 12:26:02 +0000 (0:00:00.258) 0:04:08.739 ******** 2025-04-05 12:33:27.713413 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713420 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713427 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713433 | orchestrator | 2025-04-05 12:33:27.713440 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.713447 | orchestrator | Saturday 05 April 2025 12:26:02 +0000 (0:00:00.246) 0:04:08.985 ******** 2025-04-05 12:33:27.713453 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713460 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713467 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713473 | orchestrator | 2025-04-05 12:33:27.713480 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.713487 | orchestrator | Saturday 05 April 2025 12:26:02 +0000 (0:00:00.398) 0:04:09.383 ******** 2025-04-05 12:33:27.713494 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713500 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713507 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713514 | orchestrator | 2025-04-05 12:33:27.713520 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.713527 | orchestrator | Saturday 05 April 2025 12:26:02 +0000 (0:00:00.304) 0:04:09.688 ******** 2025-04-05 12:33:27.713534 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.713540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.713547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.713558 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713565 | orchestrator | 2025-04-05 12:33:27.713572 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.713578 | orchestrator | Saturday 05 April 2025 12:26:03 +0000 (0:00:00.388) 0:04:10.077 ******** 2025-04-05 12:33:27.713585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.713592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.713598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.713605 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713612 | orchestrator | 2025-04-05 12:33:27.713619 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.713625 | orchestrator | Saturday 05 April 2025 12:26:03 +0000 (0:00:00.407) 0:04:10.484 ******** 2025-04-05 12:33:27.713632 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.713639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.713646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.713652 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713662 | orchestrator | 2025-04-05 12:33:27.713669 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.713676 | orchestrator | Saturday 05 April 2025 12:26:04 +0000 (0:00:00.431) 0:04:10.916 ******** 2025-04-05 12:33:27.713682 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713689 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713696 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713703 | orchestrator | 2025-04-05 12:33:27.713709 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.713716 | orchestrator | Saturday 05 April 2025 12:26:04 +0000 (0:00:00.286) 0:04:11.203 ******** 2025-04-05 12:33:27.713723 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.713730 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.713736 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713743 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713761 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.713767 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713773 | orchestrator | 2025-04-05 12:33:27.713779 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.713786 | orchestrator | Saturday 05 April 2025 12:26:05 +0000 (0:00:00.706) 0:04:11.909 ******** 2025-04-05 12:33:27.713792 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713798 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713804 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713810 | orchestrator | 2025-04-05 12:33:27.713817 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 
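The ceph-facts tasks above resolve a single _radosgw_address per host from one of three inputs: a CIDR address block, an explicit address, or an interface name. Below is a minimal Python sketch of that selection logic against an ansible_facts-like dict; the function name, dict layout and example values are illustrative only and are not taken from the role.

import ipaddress
from typing import Optional

def pick_radosgw_address(facts: dict, iface: str, address_block: Optional[str] = None) -> str:
    # Prefer an address inside the configured address block, if any.
    if address_block:
        net = ipaddress.ip_network(address_block)
        for addr in facts.get("all_ipv4_addresses", []):
            if ipaddress.ip_address(addr) in net:
                return addr
    # Otherwise fall back to the primary IPv4 address of the configured interface.
    return facts[iface]["ipv4"]["address"]

# Example with placeholder values shaped like gathered Ansible facts:
facts = {
    "all_ipv4_addresses": ["192.168.16.10", "192.168.112.10"],
    "eth1": {"ipv4": {"address": "192.168.112.10"}},
}
print(pick_radosgw_address(facts, "eth1", "192.168.112.0/24"))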
2025-04-05 12:33:27.713826 | orchestrator | Saturday 05 April 2025 12:26:05 +0000 (0:00:00.297) 0:04:12.206 ******** 2025-04-05 12:33:27.713832 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713838 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713844 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713850 | orchestrator | 2025-04-05 12:33:27.713856 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.713862 | orchestrator | Saturday 05 April 2025 12:26:05 +0000 (0:00:00.295) 0:04:12.501 ******** 2025-04-05 12:33:27.713869 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.713875 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713881 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.713887 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713893 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.713899 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713905 | orchestrator | 2025-04-05 12:33:27.713912 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.713922 | orchestrator | Saturday 05 April 2025 12:26:06 +0000 (0:00:00.433) 0:04:12.935 ******** 2025-04-05 12:33:27.713929 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.713935 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.713957 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.713965 | orchestrator | 2025-04-05 12:33:27.713971 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.713978 | orchestrator | Saturday 05 April 2025 12:26:06 +0000 (0:00:00.514) 0:04:13.450 ******** 2025-04-05 12:33:27.713984 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.713990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.713996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.714003 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714010 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 12:33:27.714035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:33:27.714042 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:33:27.714049 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:33:27.714066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:33:27.714073 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:33:27.714080 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714087 | orchestrator | 2025-04-05 12:33:27.714094 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.714101 | orchestrator | Saturday 05 April 2025 12:26:07 +0000 (0:00:00.699) 0:04:14.150 ******** 2025-04-05 12:33:27.714108 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714115 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714122 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714129 | orchestrator | 2025-04-05 12:33:27.714136 | orchestrator | TASK [ceph-rgw 
: create rgw keyrings] ****************************************** 2025-04-05 12:33:27.714143 | orchestrator | Saturday 05 April 2025 12:26:08 +0000 (0:00:00.559) 0:04:14.709 ******** 2025-04-05 12:33:27.714150 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714157 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714164 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714171 | orchestrator | 2025-04-05 12:33:27.714178 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.714185 | orchestrator | Saturday 05 April 2025 12:26:08 +0000 (0:00:00.564) 0:04:15.273 ******** 2025-04-05 12:33:27.714191 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714198 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714205 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714212 | orchestrator | 2025-04-05 12:33:27.714219 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.714226 | orchestrator | Saturday 05 April 2025 12:26:09 +0000 (0:00:00.674) 0:04:15.947 ******** 2025-04-05 12:33:27.714233 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714240 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714247 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714254 | orchestrator | 2025-04-05 12:33:27.714261 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-04-05 12:33:27.714268 | orchestrator | Saturday 05 April 2025 12:26:09 +0000 (0:00:00.496) 0:04:16.443 ******** 2025-04-05 12:33:27.714275 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714282 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714289 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714296 | orchestrator | 2025-04-05 12:33:27.714303 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-04-05 12:33:27.714310 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.292) 0:04:16.736 ******** 2025-04-05 12:33:27.714321 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.714328 | orchestrator | 2025-04-05 12:33:27.714335 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-04-05 12:33:27.714342 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.602) 0:04:17.338 ******** 2025-04-05 12:33:27.714349 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714356 | orchestrator | 2025-04-05 12:33:27.714363 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-04-05 12:33:27.714369 | orchestrator | Saturday 05 April 2025 12:26:10 +0000 (0:00:00.130) 0:04:17.469 ******** 2025-04-05 12:33:27.714375 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-05 12:33:27.714381 | orchestrator | 2025-04-05 12:33:27.714387 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-04-05 12:33:27.714394 | orchestrator | Saturday 05 April 2025 12:26:11 +0000 (0:00:00.793) 0:04:18.263 ******** 2025-04-05 12:33:27.714400 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714406 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714412 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714418 | 
orchestrator | 2025-04-05 12:33:27.714424 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-04-05 12:33:27.714430 | orchestrator | Saturday 05 April 2025 12:26:11 +0000 (0:00:00.258) 0:04:18.521 ******** 2025-04-05 12:33:27.714436 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714443 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714448 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714455 | orchestrator | 2025-04-05 12:33:27.714461 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-04-05 12:33:27.714467 | orchestrator | Saturday 05 April 2025 12:26:12 +0000 (0:00:00.461) 0:04:18.983 ******** 2025-04-05 12:33:27.714473 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.714479 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.714485 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.714491 | orchestrator | 2025-04-05 12:33:27.714497 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-04-05 12:33:27.714503 | orchestrator | Saturday 05 April 2025 12:26:13 +0000 (0:00:01.247) 0:04:20.231 ******** 2025-04-05 12:33:27.714509 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.714516 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.714522 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.714528 | orchestrator | 2025-04-05 12:33:27.714537 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-04-05 12:33:27.714559 | orchestrator | Saturday 05 April 2025 12:26:14 +0000 (0:00:00.692) 0:04:20.924 ******** 2025-04-05 12:33:27.714567 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.714573 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.714579 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.714585 | orchestrator | 2025-04-05 12:33:27.714591 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-04-05 12:33:27.714598 | orchestrator | Saturday 05 April 2025 12:26:14 +0000 (0:00:00.607) 0:04:21.531 ******** 2025-04-05 12:33:27.714604 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714610 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714616 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714623 | orchestrator | 2025-04-05 12:33:27.714629 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-04-05 12:33:27.714635 | orchestrator | Saturday 05 April 2025 12:26:15 +0000 (0:00:00.866) 0:04:22.397 ******** 2025-04-05 12:33:27.714641 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714648 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714654 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714660 | orchestrator | 2025-04-05 12:33:27.714666 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-04-05 12:33:27.714676 | orchestrator | Saturday 05 April 2025 12:26:15 +0000 (0:00:00.271) 0:04:22.669 ******** 2025-04-05 12:33:27.714682 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714689 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714695 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714701 | orchestrator | 2025-04-05 12:33:27.714707 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] 
************************ 2025-04-05 12:33:27.714713 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.300) 0:04:22.969 ******** 2025-04-05 12:33:27.714720 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714726 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714732 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714738 | orchestrator | 2025-04-05 12:33:27.714777 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-04-05 12:33:27.714784 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.372) 0:04:23.342 ******** 2025-04-05 12:33:27.714790 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.714796 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.714803 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.714809 | orchestrator | 2025-04-05 12:33:27.714815 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-04-05 12:33:27.714821 | orchestrator | Saturday 05 April 2025 12:26:16 +0000 (0:00:00.260) 0:04:23.602 ******** 2025-04-05 12:33:27.714828 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.714834 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.714840 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.714846 | orchestrator | 2025-04-05 12:33:27.714852 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-04-05 12:33:27.714858 | orchestrator | Saturday 05 April 2025 12:26:18 +0000 (0:00:01.317) 0:04:24.920 ******** 2025-04-05 12:33:27.714865 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714877 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714883 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714889 | orchestrator | 2025-04-05 12:33:27.714896 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-04-05 12:33:27.714902 | orchestrator | Saturday 05 April 2025 12:26:18 +0000 (0:00:00.333) 0:04:25.254 ******** 2025-04-05 12:33:27.714908 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.714914 | orchestrator | 2025-04-05 12:33:27.714921 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-04-05 12:33:27.714927 | orchestrator | Saturday 05 April 2025 12:26:19 +0000 (0:00:00.648) 0:04:25.902 ******** 2025-04-05 12:33:27.714933 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714939 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714945 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714951 | orchestrator | 2025-04-05 12:33:27.714957 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-04-05 12:33:27.714962 | orchestrator | Saturday 05 April 2025 12:26:19 +0000 (0:00:00.293) 0:04:26.195 ******** 2025-04-05 12:33:27.714968 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.714974 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.714980 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.714985 | orchestrator | 2025-04-05 12:33:27.714991 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-04-05 12:33:27.714997 | orchestrator | Saturday 05 April 2025 12:26:19 +0000 (0:00:00.290) 0:04:26.486 ******** 
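The keyring and mkfs tasks above run ceph-authtool and ceph-mon (the "container command" facts set just before them wrap the same binaries for container execution). A rough, non-containerized sketch of those two steps follows; the monitor id and paths are placeholders, and a real bootstrap normally also supplies a monmap and ceph.conf.

import subprocess

MON_ID = "testbed-node-0"           # placeholder monitor id
KEYRING = "/tmp/ceph.mon.keyring"   # placeholder path

# Generate the monitor keyring (cf. the 'create monitor initial keyring' task).
subprocess.run(
    ["ceph-authtool", "--create-keyring", KEYRING,
     "--gen-key", "-n", "mon.", "--cap", "mon", "allow *"],
    check=True,
)

# Initialise the monitor store (cf. the 'ceph monitor mkfs with keyring' task).
subprocess.run(
    ["ceph-mon", "--mkfs", "-i", MON_ID, "--keyring", KEYRING],
    check=True,
)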
2025-04-05 12:33:27.715004 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.715009 | orchestrator | 2025-04-05 12:33:27.715015 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-04-05 12:33:27.715021 | orchestrator | Saturday 05 April 2025 12:26:20 +0000 (0:00:00.620) 0:04:27.106 ******** 2025-04-05 12:33:27.715027 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715039 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715045 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715050 | orchestrator | 2025-04-05 12:33:27.715056 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-04-05 12:33:27.715062 | orchestrator | Saturday 05 April 2025 12:26:21 +0000 (0:00:01.256) 0:04:28.363 ******** 2025-04-05 12:33:27.715068 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715074 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715080 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715086 | orchestrator | 2025-04-05 12:33:27.715091 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-04-05 12:33:27.715097 | orchestrator | Saturday 05 April 2025 12:26:22 +0000 (0:00:00.974) 0:04:29.337 ******** 2025-04-05 12:33:27.715103 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715109 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715115 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715120 | orchestrator | 2025-04-05 12:33:27.715126 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-04-05 12:33:27.715148 | orchestrator | Saturday 05 April 2025 12:26:24 +0000 (0:00:01.817) 0:04:31.155 ******** 2025-04-05 12:33:27.715155 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715161 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715167 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715173 | orchestrator | 2025-04-05 12:33:27.715182 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-04-05 12:33:27.715188 | orchestrator | Saturday 05 April 2025 12:26:26 +0000 (0:00:02.168) 0:04:33.324 ******** 2025-04-05 12:33:27.715194 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.715200 | orchestrator | 2025-04-05 12:33:27.715207 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-04-05 12:33:27.715217 | orchestrator | Saturday 05 April 2025 12:26:27 +0000 (0:00:00.804) 0:04:34.128 ******** 2025-04-05 12:33:27.715228 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
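The "waiting for the monitor(s) to form the quorum..." retry above is essentially a poll of the monitor quorum until every expected monitor is listed. A sketch of such a loop is shown below, assuming ceph quorum_status is reachable with the deployed keys; the expected monitor names and the retry/delay values are illustrative, not the role's settings.

import json
import subprocess
import time

expected_mons = {"testbed-node-0", "testbed-node-1", "testbed-node-2"}  # placeholder names

for attempt in range(10):
    result = subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        quorum = set(json.loads(result.stdout).get("quorum_names", []))
        if expected_mons <= quorum:
            print("quorum formed:", sorted(quorum))
            break
    time.sleep(10)
else:
    raise RuntimeError("monitors never formed a quorum")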
2025-04-05 12:33:27.715238 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715247 | orchestrator | 2025-04-05 12:33:27.715257 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-04-05 12:33:27.715265 | orchestrator | Saturday 05 April 2025 12:26:48 +0000 (0:00:21.534) 0:04:55.663 ******** 2025-04-05 12:33:27.715271 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.715277 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715283 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.715289 | orchestrator | 2025-04-05 12:33:27.715295 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-04-05 12:33:27.715300 | orchestrator | Saturday 05 April 2025 12:26:54 +0000 (0:00:05.653) 0:05:01.317 ******** 2025-04-05 12:33:27.715306 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715312 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715318 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715324 | orchestrator | 2025-04-05 12:33:27.715330 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.715335 | orchestrator | Saturday 05 April 2025 12:26:55 +0000 (0:00:01.210) 0:05:02.528 ******** 2025-04-05 12:33:27.715341 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715347 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715353 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715359 | orchestrator | 2025-04-05 12:33:27.715364 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-05 12:33:27.715370 | orchestrator | Saturday 05 April 2025 12:26:56 +0000 (0:00:00.679) 0:05:03.208 ******** 2025-04-05 12:33:27.715376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.715382 | orchestrator | 2025-04-05 12:33:27.715392 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-05 12:33:27.715398 | orchestrator | Saturday 05 April 2025 12:26:57 +0000 (0:00:00.786) 0:05:03.994 ******** 2025-04-05 12:33:27.715404 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715410 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.715416 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.715422 | orchestrator | 2025-04-05 12:33:27.715427 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-05 12:33:27.715433 | orchestrator | Saturday 05 April 2025 12:26:57 +0000 (0:00:00.366) 0:05:04.361 ******** 2025-04-05 12:33:27.715439 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715445 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715451 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715457 | orchestrator | 2025-04-05 12:33:27.715462 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-05 12:33:27.715468 | orchestrator | Saturday 05 April 2025 12:26:58 +0000 (0:00:01.095) 0:05:05.456 ******** 2025-04-05 12:33:27.715474 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.715480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.715486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 
12:33:27.715492 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715498 | orchestrator | 2025-04-05 12:33:27.715503 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-05 12:33:27.715509 | orchestrator | Saturday 05 April 2025 12:26:59 +0000 (0:00:01.138) 0:05:06.595 ******** 2025-04-05 12:33:27.715515 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715521 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.715527 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.715532 | orchestrator | 2025-04-05 12:33:27.715538 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.715544 | orchestrator | Saturday 05 April 2025 12:27:00 +0000 (0:00:00.383) 0:05:06.978 ******** 2025-04-05 12:33:27.715550 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.715556 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.715562 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.715567 | orchestrator | 2025-04-05 12:33:27.715573 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-04-05 12:33:27.715579 | orchestrator | 2025-04-05 12:33:27.715585 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.715590 | orchestrator | Saturday 05 April 2025 12:27:02 +0000 (0:00:02.070) 0:05:09.049 ******** 2025-04-05 12:33:27.715596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.715602 | orchestrator | 2025-04-05 12:33:27.715608 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-05 12:33:27.715614 | orchestrator | Saturday 05 April 2025 12:27:03 +0000 (0:00:00.746) 0:05:09.795 ******** 2025-04-05 12:33:27.715620 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715626 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.715631 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.715637 | orchestrator | 2025-04-05 12:33:27.715643 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.715662 | orchestrator | Saturday 05 April 2025 12:27:03 +0000 (0:00:00.705) 0:05:10.501 ******** 2025-04-05 12:33:27.715669 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715675 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715681 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715687 | orchestrator | 2025-04-05 12:33:27.715693 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.715699 | orchestrator | Saturday 05 April 2025 12:27:04 +0000 (0:00:00.323) 0:05:10.825 ******** 2025-04-05 12:33:27.715705 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715711 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715721 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715730 | orchestrator | 2025-04-05 12:33:27.715736 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.715755 | orchestrator | Saturday 05 April 2025 12:27:04 +0000 (0:00:00.526) 0:05:11.351 ******** 2025-04-05 12:33:27.715762 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715768 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:33:27.715774 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715780 | orchestrator | 2025-04-05 12:33:27.715785 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.715791 | orchestrator | Saturday 05 April 2025 12:27:04 +0000 (0:00:00.330) 0:05:11.682 ******** 2025-04-05 12:33:27.715797 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.715803 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.715809 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.715815 | orchestrator | 2025-04-05 12:33:27.715821 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.715827 | orchestrator | Saturday 05 April 2025 12:27:05 +0000 (0:00:00.693) 0:05:12.376 ******** 2025-04-05 12:33:27.715832 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715838 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715844 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715850 | orchestrator | 2025-04-05 12:33:27.715856 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.715862 | orchestrator | Saturday 05 April 2025 12:27:05 +0000 (0:00:00.318) 0:05:12.694 ******** 2025-04-05 12:33:27.715868 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715873 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715879 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715885 | orchestrator | 2025-04-05 12:33:27.715891 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.715897 | orchestrator | Saturday 05 April 2025 12:27:06 +0000 (0:00:00.545) 0:05:13.240 ******** 2025-04-05 12:33:27.715903 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715909 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715914 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715920 | orchestrator | 2025-04-05 12:33:27.715926 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.715932 | orchestrator | Saturday 05 April 2025 12:27:06 +0000 (0:00:00.338) 0:05:13.579 ******** 2025-04-05 12:33:27.715938 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715944 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715950 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715956 | orchestrator | 2025-04-05 12:33:27.715961 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.715967 | orchestrator | Saturday 05 April 2025 12:27:07 +0000 (0:00:00.316) 0:05:13.895 ******** 2025-04-05 12:33:27.715973 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.715979 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.715985 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.715991 | orchestrator | 2025-04-05 12:33:27.715997 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.716003 | orchestrator | Saturday 05 April 2025 12:27:07 +0000 (0:00:00.331) 0:05:14.227 ******** 2025-04-05 12:33:27.716009 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.716014 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.716020 | orchestrator | ok: [testbed-node-2] 
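The repeated "check for a ... container" tasks at the start of each play only record whether a matching container is already running; the handler_*_status facts later in the play are derived from those results. A minimal sketch of such a check, assuming podman as the container runtime and an illustrative container name pattern:

import subprocess

def container_running(name_filter: str) -> bool:
    # 'podman ps -q' prints one container id per running match, so empty
    # output means "not running".
    result = subprocess.run(
        ["podman", "ps", "-q", "--filter", f"name={name_filter}"],
        capture_output=True, text=True,
    )
    return bool(result.stdout.strip())

print(container_running("ceph-mon-testbed-node-0"))  # name pattern is illustrative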
2025-04-05 12:33:27.716026 | orchestrator | 2025-04-05 12:33:27.716032 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.716038 | orchestrator | Saturday 05 April 2025 12:27:08 +0000 (0:00:00.944) 0:05:15.172 ******** 2025-04-05 12:33:27.716044 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716049 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716055 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716064 | orchestrator | 2025-04-05 12:33:27.716070 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.716076 | orchestrator | Saturday 05 April 2025 12:27:08 +0000 (0:00:00.377) 0:05:15.549 ******** 2025-04-05 12:33:27.716082 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.716088 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.716094 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.716099 | orchestrator | 2025-04-05 12:33:27.716105 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.716111 | orchestrator | Saturday 05 April 2025 12:27:09 +0000 (0:00:00.407) 0:05:15.957 ******** 2025-04-05 12:33:27.716117 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716123 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716129 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716134 | orchestrator | 2025-04-05 12:33:27.716140 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.716146 | orchestrator | Saturday 05 April 2025 12:27:09 +0000 (0:00:00.368) 0:05:16.326 ******** 2025-04-05 12:33:27.716152 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716158 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716164 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716169 | orchestrator | 2025-04-05 12:33:27.716175 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.716181 | orchestrator | Saturday 05 April 2025 12:27:10 +0000 (0:00:00.559) 0:05:16.886 ******** 2025-04-05 12:33:27.716187 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716193 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716199 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716205 | orchestrator | 2025-04-05 12:33:27.716211 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.716230 | orchestrator | Saturday 05 April 2025 12:27:10 +0000 (0:00:00.330) 0:05:17.216 ******** 2025-04-05 12:33:27.716237 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716243 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716248 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716254 | orchestrator | 2025-04-05 12:33:27.716260 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.716266 | orchestrator | Saturday 05 April 2025 12:27:10 +0000 (0:00:00.346) 0:05:17.563 ******** 2025-04-05 12:33:27.716272 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716278 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716283 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716289 | orchestrator | 2025-04-05 12:33:27.716295 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.716301 | orchestrator | Saturday 05 April 2025 12:27:11 +0000 (0:00:00.334) 0:05:17.897 ******** 2025-04-05 12:33:27.716307 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.716313 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.716319 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.716325 | orchestrator | 2025-04-05 12:33:27.716331 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.716339 | orchestrator | Saturday 05 April 2025 12:27:11 +0000 (0:00:00.557) 0:05:18.455 ******** 2025-04-05 12:33:27.716345 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.716351 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.716360 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.716366 | orchestrator | 2025-04-05 12:33:27.716372 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.716378 | orchestrator | Saturday 05 April 2025 12:27:12 +0000 (0:00:00.355) 0:05:18.810 ******** 2025-04-05 12:33:27.716384 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716390 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716396 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716401 | orchestrator | 2025-04-05 12:33:27.716407 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.716417 | orchestrator | Saturday 05 April 2025 12:27:12 +0000 (0:00:00.381) 0:05:19.192 ******** 2025-04-05 12:33:27.716423 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716429 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716434 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716440 | orchestrator | 2025-04-05 12:33:27.716446 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.716452 | orchestrator | Saturday 05 April 2025 12:27:12 +0000 (0:00:00.325) 0:05:19.517 ******** 2025-04-05 12:33:27.716461 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716467 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716473 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716479 | orchestrator | 2025-04-05 12:33:27.716485 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.716490 | orchestrator | Saturday 05 April 2025 12:27:13 +0000 (0:00:00.646) 0:05:20.164 ******** 2025-04-05 12:33:27.716496 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716502 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716508 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716514 | orchestrator | 2025-04-05 12:33:27.716520 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.716526 | orchestrator | Saturday 05 April 2025 12:27:13 +0000 (0:00:00.356) 0:05:20.520 ******** 2025-04-05 12:33:27.716532 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716538 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716543 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716549 | orchestrator | 2025-04-05 12:33:27.716555 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
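The ceph-config tasks around here (and their first pass earlier in this sequence) compute num_osds from 'ceph-volume lvm batch --report'. A hedged sketch of that computation is below; the device list is a placeholder, and the two branches mirror the "legacy report" / "new report" distinction visible in the task names.

import json
import subprocess

devices = ["/dev/sdb", "/dev/sdc"]  # placeholder device list

report = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *devices],
    capture_output=True, text=True, check=True,
)
data = json.loads(report.stdout)

# Newer ceph-volume prints a JSON list of planned OSDs; the legacy report
# nested them under an "osds" key.
num_osds = len(data) if isinstance(data, list) else len(data.get("osds", []))
print("osds to be created:", num_osds)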
2025-04-05 12:33:27.716561 | orchestrator | Saturday 05 April 2025 12:27:14 +0000 (0:00:00.287) 0:05:20.808 ******** 2025-04-05 12:33:27.716567 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716573 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716579 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716584 | orchestrator | 2025-04-05 12:33:27.716590 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.716596 | orchestrator | Saturday 05 April 2025 12:27:14 +0000 (0:00:00.240) 0:05:21.049 ******** 2025-04-05 12:33:27.716602 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716607 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716613 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716619 | orchestrator | 2025-04-05 12:33:27.716625 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.716631 | orchestrator | Saturday 05 April 2025 12:27:14 +0000 (0:00:00.409) 0:05:21.458 ******** 2025-04-05 12:33:27.716637 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716642 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716648 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716654 | orchestrator | 2025-04-05 12:33:27.716660 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.716666 | orchestrator | Saturday 05 April 2025 12:27:15 +0000 (0:00:00.295) 0:05:21.754 ******** 2025-04-05 12:33:27.716672 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716677 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716683 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716689 | orchestrator | 2025-04-05 12:33:27.716695 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.716701 | orchestrator | Saturday 05 April 2025 12:27:15 +0000 (0:00:00.328) 0:05:22.082 ******** 2025-04-05 12:33:27.716707 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716713 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716718 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716728 | orchestrator | 2025-04-05 12:33:27.716734 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.716739 | orchestrator | Saturday 05 April 2025 12:27:15 +0000 (0:00:00.302) 0:05:22.385 ******** 2025-04-05 12:33:27.716756 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716762 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716768 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716774 | orchestrator | 2025-04-05 12:33:27.716793 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.716800 | orchestrator | Saturday 05 April 2025 12:27:16 +0000 (0:00:00.453) 0:05:22.838 ******** 2025-04-05 12:33:27.716806 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716812 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716818 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716824 | orchestrator | 2025-04-05 12:33:27.716830 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.716836 | orchestrator | Saturday 05 April 2025 12:27:16 +0000 (0:00:00.290) 0:05:23.128 ******** 2025-04-05 12:33:27.716842 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.716848 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.716854 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716859 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.716865 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.716871 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716877 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.716883 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.716889 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716895 | orchestrator | 2025-04-05 12:33:27.716901 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.716907 | orchestrator | Saturday 05 April 2025 12:27:16 +0000 (0:00:00.314) 0:05:23.443 ******** 2025-04-05 12:33:27.716913 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-05 12:33:27.716919 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-05 12:33:27.716925 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716930 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-05 12:33:27.716936 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-05 12:33:27.716942 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716948 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-05 12:33:27.716954 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-05 12:33:27.716960 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.716966 | orchestrator | 2025-04-05 12:33:27.716972 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.716977 | orchestrator | Saturday 05 April 2025 12:27:17 +0000 (0:00:00.315) 0:05:23.759 ******** 2025-04-05 12:33:27.716983 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.716989 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.716995 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717001 | orchestrator | 2025-04-05 12:33:27.717007 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.717013 | orchestrator | Saturday 05 April 2025 12:27:17 +0000 (0:00:00.421) 0:05:24.180 ******** 2025-04-05 12:33:27.717019 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717024 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717030 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717036 | orchestrator | 2025-04-05 12:33:27.717042 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.717048 | orchestrator | Saturday 05 April 2025 12:27:17 +0000 (0:00:00.264) 0:05:24.445 ******** 2025-04-05 12:33:27.717054 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717064 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717072 | orchestrator | skipping: [testbed-node-2] 2025-04-05 
12:33:27.717078 | orchestrator | 2025-04-05 12:33:27.717084 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.717094 | orchestrator | Saturday 05 April 2025 12:27:17 +0000 (0:00:00.254) 0:05:24.699 ******** 2025-04-05 12:33:27.717100 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717106 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717112 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717118 | orchestrator | 2025-04-05 12:33:27.717123 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.717129 | orchestrator | Saturday 05 April 2025 12:27:18 +0000 (0:00:00.265) 0:05:24.964 ******** 2025-04-05 12:33:27.717135 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717141 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717147 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717152 | orchestrator | 2025-04-05 12:33:27.717158 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.717164 | orchestrator | Saturday 05 April 2025 12:27:18 +0000 (0:00:00.391) 0:05:25.356 ******** 2025-04-05 12:33:27.717169 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717175 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717181 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717187 | orchestrator | 2025-04-05 12:33:27.717193 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.717198 | orchestrator | Saturday 05 April 2025 12:27:18 +0000 (0:00:00.241) 0:05:25.597 ******** 2025-04-05 12:33:27.717204 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.717210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.717216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.717222 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717228 | orchestrator | 2025-04-05 12:33:27.717233 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.717239 | orchestrator | Saturday 05 April 2025 12:27:19 +0000 (0:00:00.373) 0:05:25.971 ******** 2025-04-05 12:33:27.717245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.717251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.717257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.717263 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717269 | orchestrator | 2025-04-05 12:33:27.717287 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.717294 | orchestrator | Saturday 05 April 2025 12:27:19 +0000 (0:00:00.312) 0:05:26.283 ******** 2025-04-05 12:33:27.717300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.717306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.717312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.717318 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717324 | orchestrator | 2025-04-05 12:33:27.717330 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.717336 | orchestrator | Saturday 05 April 2025 12:27:19 +0000 (0:00:00.314) 0:05:26.597 ******** 2025-04-05 12:33:27.717342 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717348 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717354 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717359 | orchestrator | 2025-04-05 12:33:27.717365 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.717374 | orchestrator | Saturday 05 April 2025 12:27:20 +0000 (0:00:00.289) 0:05:26.887 ******** 2025-04-05 12:33:27.717380 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.717416 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717422 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.717428 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717434 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.717440 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717446 | orchestrator | 2025-04-05 12:33:27.717452 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.717458 | orchestrator | Saturday 05 April 2025 12:27:20 +0000 (0:00:00.597) 0:05:27.484 ******** 2025-04-05 12:33:27.717463 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717469 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717475 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717481 | orchestrator | 2025-04-05 12:33:27.717487 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.717493 | orchestrator | Saturday 05 April 2025 12:27:21 +0000 (0:00:00.303) 0:05:27.787 ******** 2025-04-05 12:33:27.717499 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717505 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717510 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717516 | orchestrator | 2025-04-05 12:33:27.717522 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.717528 | orchestrator | Saturday 05 April 2025 12:27:21 +0000 (0:00:00.289) 0:05:28.077 ******** 2025-04-05 12:33:27.717534 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.717540 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717546 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.717551 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717557 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.717563 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717569 | orchestrator | 2025-04-05 12:33:27.717575 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.717581 | orchestrator | Saturday 05 April 2025 12:27:21 +0000 (0:00:00.395) 0:05:28.472 ******** 2025-04-05 12:33:27.717587 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717593 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717599 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717604 | orchestrator | 2025-04-05 12:33:27.717610 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
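The rgw_instances facts built by the preceding ceph-facts tasks, and aggregated by the set_fact rgw_instances_all task whose record follows, are skipped on the monitor nodes in this play; later in the run they resolve on the Rados Gateway hosts to per-host entries such as {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}. A minimal illustrative sketch of how such a list can be assembled with set_fact is shown here; it assumes the variables _radosgw_address, radosgw_frontend_port and radosgw_num_instances and is not the literal task from the ceph-facts role.

# Illustrative only, not the ceph-facts role's actual task.
# Assumes _radosgw_address, radosgw_frontend_port and radosgw_num_instances are already set.
- name: set_fact rgw_instances without rgw multisite (sketch)
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': (radosgw_frontend_port | int) + (item | int)}] }}
  with_sequence: start=0 end={{ radosgw_num_instances | int - 1 }}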
2025-04-05 12:33:27.717616 | orchestrator | Saturday 05 April 2025 12:27:22 +0000 (0:00:00.464) 0:05:28.937 ******** 2025-04-05 12:33:27.717622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.717628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.717634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.717640 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 12:33:27.717651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:33:27.717657 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:33:27.717663 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:33:27.717675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:33:27.717681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:33:27.717686 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717696 | orchestrator | 2025-04-05 12:33:27.717701 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.717707 | orchestrator | Saturday 05 April 2025 12:27:22 +0000 (0:00:00.558) 0:05:29.495 ******** 2025-04-05 12:33:27.717713 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717719 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717728 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717734 | orchestrator | 2025-04-05 12:33:27.717740 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.717771 | orchestrator | Saturday 05 April 2025 12:27:23 +0000 (0:00:00.596) 0:05:30.091 ******** 2025-04-05 12:33:27.717778 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717784 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717790 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717796 | orchestrator | 2025-04-05 12:33:27.717802 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.717808 | orchestrator | Saturday 05 April 2025 12:27:23 +0000 (0:00:00.510) 0:05:30.601 ******** 2025-04-05 12:33:27.717813 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717819 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717825 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717831 | orchestrator | 2025-04-05 12:33:27.717837 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.717861 | orchestrator | Saturday 05 April 2025 12:27:24 +0000 (0:00:00.643) 0:05:31.244 ******** 2025-04-05 12:33:27.717868 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.717874 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.717880 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.717886 | orchestrator | 2025-04-05 12:33:27.717892 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-05 12:33:27.717898 | orchestrator | Saturday 05 April 2025 12:27:25 +0000 (0:00:00.521) 0:05:31.766 ******** 2025-04-05 12:33:27.717904 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-04-05 12:33:27.717910 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.717916 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.717922 | orchestrator | 2025-04-05 12:33:27.717928 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-05 12:33:27.717934 | orchestrator | Saturday 05 April 2025 12:27:25 +0000 (0:00:00.769) 0:05:32.536 ******** 2025-04-05 12:33:27.717940 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.717946 | orchestrator | 2025-04-05 12:33:27.717952 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-05 12:33:27.717957 | orchestrator | Saturday 05 April 2025 12:27:26 +0000 (0:00:00.651) 0:05:33.187 ******** 2025-04-05 12:33:27.717963 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.717969 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.717975 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.717981 | orchestrator | 2025-04-05 12:33:27.717987 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-05 12:33:27.717993 | orchestrator | Saturday 05 April 2025 12:27:27 +0000 (0:00:00.662) 0:05:33.850 ******** 2025-04-05 12:33:27.717998 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718003 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718011 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.718032 | orchestrator | 2025-04-05 12:33:27.718037 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-05 12:33:27.718043 | orchestrator | Saturday 05 April 2025 12:27:27 +0000 (0:00:00.327) 0:05:34.178 ******** 2025-04-05 12:33:27.718048 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:33:27.718054 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:33:27.718059 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:33:27.718065 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-05 12:33:27.718070 | orchestrator | 2025-04-05 12:33:27.718075 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-05 12:33:27.718081 | orchestrator | Saturday 05 April 2025 12:27:34 +0000 (0:00:07.325) 0:05:41.503 ******** 2025-04-05 12:33:27.718090 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.718095 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.718101 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718106 | orchestrator | 2025-04-05 12:33:27.718111 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-04-05 12:33:27.718117 | orchestrator | Saturday 05 April 2025 12:27:35 +0000 (0:00:00.398) 0:05:41.901 ******** 2025-04-05 12:33:27.718122 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-05 12:33:27.718130 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-05 12:33:27.718136 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-05 12:33:27.718141 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-05 12:33:27.718147 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-04-05 12:33:27.718152 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:33:27.718157 | orchestrator | 2025-04-05 12:33:27.718163 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-05 12:33:27.718168 | orchestrator | Saturday 05 April 2025 12:27:37 +0000 (0:00:01.806) 0:05:43.707 ******** 2025-04-05 12:33:27.718173 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-05 12:33:27.718179 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-05 12:33:27.718184 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-05 12:33:27.718189 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:33:27.718195 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-05 12:33:27.718200 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-05 12:33:27.718205 | orchestrator | 2025-04-05 12:33:27.718210 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-05 12:33:27.718215 | orchestrator | Saturday 05 April 2025 12:27:38 +0000 (0:00:01.200) 0:05:44.907 ******** 2025-04-05 12:33:27.718221 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.718226 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.718231 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718237 | orchestrator | 2025-04-05 12:33:27.718242 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-05 12:33:27.718247 | orchestrator | Saturday 05 April 2025 12:27:38 +0000 (0:00:00.743) 0:05:45.651 ******** 2025-04-05 12:33:27.718252 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718257 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718263 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.718268 | orchestrator | 2025-04-05 12:33:27.718273 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-05 12:33:27.718278 | orchestrator | Saturday 05 April 2025 12:27:39 +0000 (0:00:00.310) 0:05:45.962 ******** 2025-04-05 12:33:27.718284 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718289 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718294 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.718300 | orchestrator | 2025-04-05 12:33:27.718305 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-04-05 12:33:27.718310 | orchestrator | Saturday 05 April 2025 12:27:39 +0000 (0:00:00.570) 0:05:46.532 ******** 2025-04-05 12:33:27.718328 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.718335 | orchestrator | 2025-04-05 12:33:27.718340 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-04-05 12:33:27.718345 | orchestrator | Saturday 05 April 2025 12:27:40 +0000 (0:00:00.551) 0:05:47.084 ******** 2025-04-05 12:33:27.718351 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718356 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718362 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.718367 | orchestrator | 2025-04-05 12:33:27.718375 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-04-05 12:33:27.718383 | orchestrator | 
Saturday 05 April 2025 12:27:40 +0000 (0:00:00.330) 0:05:47.414 ******** 2025-04-05 12:33:27.718391 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718397 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718403 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.718408 | orchestrator | 2025-04-05 12:33:27.718413 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-04-05 12:33:27.718418 | orchestrator | Saturday 05 April 2025 12:27:41 +0000 (0:00:00.547) 0:05:47.962 ******** 2025-04-05 12:33:27.718424 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.718429 | orchestrator | 2025-04-05 12:33:27.718435 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-04-05 12:33:27.718440 | orchestrator | Saturday 05 April 2025 12:27:41 +0000 (0:00:00.570) 0:05:48.533 ******** 2025-04-05 12:33:27.718445 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718451 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718456 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718461 | orchestrator | 2025-04-05 12:33:27.718467 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-04-05 12:33:27.718472 | orchestrator | Saturday 05 April 2025 12:27:43 +0000 (0:00:01.355) 0:05:49.888 ******** 2025-04-05 12:33:27.718477 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718483 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718488 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718493 | orchestrator | 2025-04-05 12:33:27.718499 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-04-05 12:33:27.718504 | orchestrator | Saturday 05 April 2025 12:27:44 +0000 (0:00:01.633) 0:05:51.522 ******** 2025-04-05 12:33:27.718510 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718515 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718520 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718526 | orchestrator | 2025-04-05 12:33:27.718531 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-04-05 12:33:27.718536 | orchestrator | Saturday 05 April 2025 12:27:46 +0000 (0:00:01.975) 0:05:53.498 ******** 2025-04-05 12:33:27.718542 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718547 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718552 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718557 | orchestrator | 2025-04-05 12:33:27.718563 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-04-05 12:33:27.718568 | orchestrator | Saturday 05 April 2025 12:27:48 +0000 (0:00:01.910) 0:05:55.408 ******** 2025-04-05 12:33:27.718573 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718579 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.718584 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-04-05 12:33:27.718589 | orchestrator | 2025-04-05 12:33:27.718595 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-04-05 12:33:27.718600 | orchestrator | Saturday 05 April 2025 12:27:49 +0000 (0:00:00.750) 0:05:56.159 ******** 2025-04-05 
12:33:27.718605 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-04-05 12:33:27.718611 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-04-05 12:33:27.718616 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.718621 | orchestrator | 2025-04-05 12:33:27.718627 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-04-05 12:33:27.718632 | orchestrator | Saturday 05 April 2025 12:28:02 +0000 (0:00:13.300) 0:06:09.459 ******** 2025-04-05 12:33:27.718638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.718643 | orchestrator | 2025-04-05 12:33:27.718648 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-04-05 12:33:27.718654 | orchestrator | Saturday 05 April 2025 12:28:04 +0000 (0:00:01.584) 0:06:11.044 ******** 2025-04-05 12:33:27.718662 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718668 | orchestrator | 2025-04-05 12:33:27.718673 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-04-05 12:33:27.718678 | orchestrator | Saturday 05 April 2025 12:28:04 +0000 (0:00:00.649) 0:06:11.693 ******** 2025-04-05 12:33:27.718683 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718689 | orchestrator | 2025-04-05 12:33:27.718694 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-04-05 12:33:27.718699 | orchestrator | Saturday 05 April 2025 12:28:05 +0000 (0:00:00.322) 0:06:12.015 ******** 2025-04-05 12:33:27.718705 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-04-05 12:33:27.718710 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-04-05 12:33:27.718715 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-04-05 12:33:27.718720 | orchestrator | 2025-04-05 12:33:27.718725 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-04-05 12:33:27.718731 | orchestrator | Saturday 05 April 2025 12:28:11 +0000 (0:00:06.103) 0:06:18.119 ******** 2025-04-05 12:33:27.718736 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-04-05 12:33:27.718775 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-04-05 12:33:27.718782 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-04-05 12:33:27.718788 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-04-05 12:33:27.718793 | orchestrator | 2025-04-05 12:33:27.718798 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.718807 | orchestrator | Saturday 05 April 2025 12:28:16 +0000 (0:00:04.816) 0:06:22.936 ******** 2025-04-05 12:33:27.718812 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718818 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718823 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718828 | orchestrator | 2025-04-05 12:33:27.718833 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-05 12:33:27.718839 | orchestrator | Saturday 05 
April 2025 12:28:16 +0000 (0:00:00.704) 0:06:23.640 ******** 2025-04-05 12:33:27.718844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.718849 | orchestrator | 2025-04-05 12:33:27.718855 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-05 12:33:27.718860 | orchestrator | Saturday 05 April 2025 12:28:17 +0000 (0:00:00.760) 0:06:24.401 ******** 2025-04-05 12:33:27.718866 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.718871 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.718876 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718882 | orchestrator | 2025-04-05 12:33:27.718887 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-05 12:33:27.718892 | orchestrator | Saturday 05 April 2025 12:28:18 +0000 (0:00:00.348) 0:06:24.749 ******** 2025-04-05 12:33:27.718898 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.718903 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.718909 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.718914 | orchestrator | 2025-04-05 12:33:27.718919 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-05 12:33:27.718925 | orchestrator | Saturday 05 April 2025 12:28:19 +0000 (0:00:01.270) 0:06:26.020 ******** 2025-04-05 12:33:27.718930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-05 12:33:27.718935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-05 12:33:27.718941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-05 12:33:27.718946 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.718951 | orchestrator | 2025-04-05 12:33:27.718957 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-05 12:33:27.718966 | orchestrator | Saturday 05 April 2025 12:28:20 +0000 (0:00:00.929) 0:06:26.949 ******** 2025-04-05 12:33:27.718971 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.718976 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.718982 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.718987 | orchestrator | 2025-04-05 12:33:27.718992 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.718998 | orchestrator | Saturday 05 April 2025 12:28:20 +0000 (0:00:00.609) 0:06:27.558 ******** 2025-04-05 12:33:27.719003 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.719008 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.719014 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.719039 | orchestrator | 2025-04-05 12:33:27.719045 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-04-05 12:33:27.719051 | orchestrator | 2025-04-05 12:33:27.719056 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.719062 | orchestrator | Saturday 05 April 2025 12:28:22 +0000 (0:00:02.049) 0:06:29.608 ******** 2025-04-05 12:33:27.719067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.719073 | orchestrator | 2025-04-05 12:33:27.719078 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-04-05 12:33:27.719084 | orchestrator | Saturday 05 April 2025 12:28:23 +0000 (0:00:00.743) 0:06:30.351 ******** 2025-04-05 12:33:27.719089 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719095 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719100 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719106 | orchestrator | 2025-04-05 12:33:27.719111 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.719116 | orchestrator | Saturday 05 April 2025 12:28:23 +0000 (0:00:00.330) 0:06:30.682 ******** 2025-04-05 12:33:27.719122 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719127 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719132 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719137 | orchestrator | 2025-04-05 12:33:27.719143 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.719148 | orchestrator | Saturday 05 April 2025 12:28:24 +0000 (0:00:00.830) 0:06:31.512 ******** 2025-04-05 12:33:27.719154 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719159 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719164 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719170 | orchestrator | 2025-04-05 12:33:27.719175 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.719180 | orchestrator | Saturday 05 April 2025 12:28:25 +0000 (0:00:00.773) 0:06:32.286 ******** 2025-04-05 12:33:27.719185 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719191 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719196 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719201 | orchestrator | 2025-04-05 12:33:27.719206 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.719212 | orchestrator | Saturday 05 April 2025 12:28:26 +0000 (0:00:01.074) 0:06:33.361 ******** 2025-04-05 12:33:27.719217 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719222 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719228 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719233 | orchestrator | 2025-04-05 12:33:27.719238 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.719243 | orchestrator | Saturday 05 April 2025 12:28:26 +0000 (0:00:00.318) 0:06:33.679 ******** 2025-04-05 12:33:27.719261 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719268 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719273 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719278 | orchestrator | 2025-04-05 12:33:27.719283 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.719294 | orchestrator | Saturday 05 April 2025 12:28:27 +0000 (0:00:00.331) 0:06:34.010 ******** 2025-04-05 12:33:27.719300 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719305 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719310 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719315 | orchestrator | 2025-04-05 12:33:27.719321 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.719329 | orchestrator | Saturday 05 
April 2025 12:28:27 +0000 (0:00:00.322) 0:06:34.333 ******** 2025-04-05 12:33:27.719334 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719340 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719345 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719350 | orchestrator | 2025-04-05 12:33:27.719356 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.719361 | orchestrator | Saturday 05 April 2025 12:28:28 +0000 (0:00:00.591) 0:06:34.924 ******** 2025-04-05 12:33:27.719366 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719371 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719377 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719382 | orchestrator | 2025-04-05 12:33:27.719387 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.719393 | orchestrator | Saturday 05 April 2025 12:28:28 +0000 (0:00:00.318) 0:06:35.242 ******** 2025-04-05 12:33:27.719398 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719403 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719409 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719414 | orchestrator | 2025-04-05 12:33:27.719419 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.719425 | orchestrator | Saturday 05 April 2025 12:28:28 +0000 (0:00:00.296) 0:06:35.539 ******** 2025-04-05 12:33:27.719430 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719435 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719440 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719446 | orchestrator | 2025-04-05 12:33:27.719451 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.719456 | orchestrator | Saturday 05 April 2025 12:28:29 +0000 (0:00:00.610) 0:06:36.150 ******** 2025-04-05 12:33:27.719462 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719467 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719472 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719478 | orchestrator | 2025-04-05 12:33:27.719483 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.719488 | orchestrator | Saturday 05 April 2025 12:28:29 +0000 (0:00:00.532) 0:06:36.682 ******** 2025-04-05 12:33:27.719493 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719499 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719504 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719509 | orchestrator | 2025-04-05 12:33:27.719515 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.719520 | orchestrator | Saturday 05 April 2025 12:28:30 +0000 (0:00:00.307) 0:06:36.989 ******** 2025-04-05 12:33:27.719526 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719531 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719536 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719541 | orchestrator | 2025-04-05 12:33:27.719547 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.719552 | orchestrator | Saturday 05 April 2025 12:28:30 +0000 (0:00:00.360) 0:06:37.350 ******** 2025-04-05 12:33:27.719557 | 
orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719563 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719568 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719573 | orchestrator | 2025-04-05 12:33:27.719579 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.719587 | orchestrator | Saturday 05 April 2025 12:28:30 +0000 (0:00:00.325) 0:06:37.675 ******** 2025-04-05 12:33:27.719592 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719598 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719603 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719608 | orchestrator | 2025-04-05 12:33:27.719613 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.719619 | orchestrator | Saturday 05 April 2025 12:28:31 +0000 (0:00:00.561) 0:06:38.237 ******** 2025-04-05 12:33:27.719624 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719630 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719637 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719643 | orchestrator | 2025-04-05 12:33:27.719648 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.719653 | orchestrator | Saturday 05 April 2025 12:28:31 +0000 (0:00:00.321) 0:06:38.558 ******** 2025-04-05 12:33:27.719659 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719664 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719669 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719674 | orchestrator | 2025-04-05 12:33:27.719680 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.719685 | orchestrator | Saturday 05 April 2025 12:28:32 +0000 (0:00:00.383) 0:06:38.942 ******** 2025-04-05 12:33:27.719690 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719695 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719701 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719706 | orchestrator | 2025-04-05 12:33:27.719711 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.719716 | orchestrator | Saturday 05 April 2025 12:28:32 +0000 (0:00:00.323) 0:06:39.265 ******** 2025-04-05 12:33:27.719721 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.719727 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.719732 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.719737 | orchestrator | 2025-04-05 12:33:27.719742 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.719774 | orchestrator | Saturday 05 April 2025 12:28:33 +0000 (0:00:00.627) 0:06:39.893 ******** 2025-04-05 12:33:27.719780 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719799 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719805 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719811 | orchestrator | 2025-04-05 12:33:27.719816 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.719821 | orchestrator | Saturday 05 April 2025 12:28:33 +0000 (0:00:00.326) 0:06:40.219 ******** 2025-04-05 12:33:27.719827 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719832 | orchestrator | skipping: [testbed-node-4] 
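The ceph-handler block recorded above probes each OSD node for running ceph containers (osd, mds, rgw, ceph-crash) and stores the outcome in handler_*_status facts, which later gate the restart handlers. A minimal sketch of that check/set_fact pattern, assuming docker as the container runtime and the hypothetical variable container_binary; this is not the role's literal task file.

# Illustrative only, a sketch of the container check pattern, not ceph-ansible's exact tasks.
- name: check for an osd container
  ansible.builtin.command: "{{ container_binary | default('docker') }} ps -q --filter name=ceph-osd"
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false

- name: set_fact handler_osd_status
  ansible.builtin.set_fact:
    handler_osd_status: "{{ ceph_osd_container_stat.rc == 0 and ceph_osd_container_stat.stdout_lines | length > 0 }}"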
2025-04-05 12:33:27.719838 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719843 | orchestrator | 2025-04-05 12:33:27.719848 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.719854 | orchestrator | Saturday 05 April 2025 12:28:33 +0000 (0:00:00.317) 0:06:40.537 ******** 2025-04-05 12:33:27.719859 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719864 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719869 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719875 | orchestrator | 2025-04-05 12:33:27.719880 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.719888 | orchestrator | Saturday 05 April 2025 12:28:34 +0000 (0:00:00.387) 0:06:40.925 ******** 2025-04-05 12:33:27.719894 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719899 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719904 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719910 | orchestrator | 2025-04-05 12:33:27.719915 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.719920 | orchestrator | Saturday 05 April 2025 12:28:34 +0000 (0:00:00.588) 0:06:41.513 ******** 2025-04-05 12:33:27.719929 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719935 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719940 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719945 | orchestrator | 2025-04-05 12:33:27.719951 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.719956 | orchestrator | Saturday 05 April 2025 12:28:35 +0000 (0:00:00.346) 0:06:41.860 ******** 2025-04-05 12:33:27.719961 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719967 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.719972 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.719977 | orchestrator | 2025-04-05 12:33:27.719982 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.719988 | orchestrator | Saturday 05 April 2025 12:28:35 +0000 (0:00:00.360) 0:06:42.221 ******** 2025-04-05 12:33:27.719993 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.719998 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720004 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720009 | orchestrator | 2025-04-05 12:33:27.720014 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.720020 | orchestrator | Saturday 05 April 2025 12:28:35 +0000 (0:00:00.327) 0:06:42.548 ******** 2025-04-05 12:33:27.720025 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720031 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720036 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720041 | orchestrator | 2025-04-05 12:33:27.720047 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.720052 | orchestrator | Saturday 05 April 2025 12:28:36 +0000 (0:00:00.622) 0:06:43.171 ******** 2025-04-05 12:33:27.720057 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720063 | orchestrator | skipping: [testbed-node-4] 2025-04-05 
12:33:27.720068 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720073 | orchestrator | 2025-04-05 12:33:27.720079 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.720084 | orchestrator | Saturday 05 April 2025 12:28:36 +0000 (0:00:00.351) 0:06:43.522 ******** 2025-04-05 12:33:27.720090 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720095 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720100 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720105 | orchestrator | 2025-04-05 12:33:27.720111 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.720116 | orchestrator | Saturday 05 April 2025 12:28:37 +0000 (0:00:00.363) 0:06:43.885 ******** 2025-04-05 12:33:27.720122 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720127 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720132 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720137 | orchestrator | 2025-04-05 12:33:27.720143 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.720148 | orchestrator | Saturday 05 April 2025 12:28:37 +0000 (0:00:00.410) 0:06:44.296 ******** 2025-04-05 12:33:27.720153 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720159 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720164 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720169 | orchestrator | 2025-04-05 12:33:27.720174 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.720180 | orchestrator | Saturday 05 April 2025 12:28:38 +0000 (0:00:00.700) 0:06:44.996 ******** 2025-04-05 12:33:27.720185 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.720190 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.720195 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.720201 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.720206 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720215 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720220 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.720225 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.720231 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720236 | orchestrator | 2025-04-05 12:33:27.720241 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.720246 | orchestrator | Saturday 05 April 2025 12:28:38 +0000 (0:00:00.495) 0:06:45.492 ******** 2025-04-05 12:33:27.720252 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-05 12:33:27.720257 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-05 12:33:27.720274 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720281 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-05 12:33:27.720286 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-05 12:33:27.720291 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720299 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-05 
12:33:27.720303 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-05 12:33:27.720308 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720313 | orchestrator | 2025-04-05 12:33:27.720318 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.720323 | orchestrator | Saturday 05 April 2025 12:28:39 +0000 (0:00:00.408) 0:06:45.901 ******** 2025-04-05 12:33:27.720328 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720333 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720338 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720343 | orchestrator | 2025-04-05 12:33:27.720347 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.720352 | orchestrator | Saturday 05 April 2025 12:28:39 +0000 (0:00:00.338) 0:06:46.239 ******** 2025-04-05 12:33:27.720357 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720362 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720367 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720372 | orchestrator | 2025-04-05 12:33:27.720377 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.720382 | orchestrator | Saturday 05 April 2025 12:28:40 +0000 (0:00:00.576) 0:06:46.815 ******** 2025-04-05 12:33:27.720386 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720391 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720396 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720401 | orchestrator | 2025-04-05 12:33:27.720406 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.720411 | orchestrator | Saturday 05 April 2025 12:28:40 +0000 (0:00:00.385) 0:06:47.200 ******** 2025-04-05 12:33:27.720416 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720420 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720425 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720432 | orchestrator | 2025-04-05 12:33:27.720437 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.720442 | orchestrator | Saturday 05 April 2025 12:28:40 +0000 (0:00:00.322) 0:06:47.522 ******** 2025-04-05 12:33:27.720447 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720452 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720457 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720462 | orchestrator | 2025-04-05 12:33:27.720467 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.720472 | orchestrator | Saturday 05 April 2025 12:28:41 +0000 (0:00:00.337) 0:06:47.859 ******** 2025-04-05 12:33:27.720476 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720481 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720486 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720495 | orchestrator | 2025-04-05 12:33:27.720500 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.720505 | orchestrator | Saturday 05 April 2025 12:28:41 +0000 (0:00:00.625) 0:06:48.485 ******** 2025-04-05 12:33:27.720509 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.720514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.720519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.720524 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720529 | orchestrator | 2025-04-05 12:33:27.720534 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.720539 | orchestrator | Saturday 05 April 2025 12:28:42 +0000 (0:00:00.442) 0:06:48.927 ******** 2025-04-05 12:33:27.720544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.720551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.720556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.720561 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720566 | orchestrator | 2025-04-05 12:33:27.720571 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.720575 | orchestrator | Saturday 05 April 2025 12:28:42 +0000 (0:00:00.431) 0:06:49.358 ******** 2025-04-05 12:33:27.720580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.720585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.720590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.720595 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720600 | orchestrator | 2025-04-05 12:33:27.720605 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.720609 | orchestrator | Saturday 05 April 2025 12:28:43 +0000 (0:00:00.403) 0:06:49.762 ******** 2025-04-05 12:33:27.720614 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720619 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720624 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720629 | orchestrator | 2025-04-05 12:33:27.720633 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.720638 | orchestrator | Saturday 05 April 2025 12:28:43 +0000 (0:00:00.320) 0:06:50.082 ******** 2025-04-05 12:33:27.720643 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.720648 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720653 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.720657 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720662 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.720667 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720672 | orchestrator | 2025-04-05 12:33:27.720677 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.720692 | orchestrator | Saturday 05 April 2025 12:28:43 +0000 (0:00:00.431) 0:06:50.514 ******** 2025-04-05 12:33:27.720698 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720703 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720707 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720712 | orchestrator | 2025-04-05 12:33:27.720717 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.720722 | 
orchestrator | Saturday 05 April 2025 12:28:44 +0000 (0:00:00.564) 0:06:51.078 ******** 2025-04-05 12:33:27.720727 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720732 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720737 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720742 | orchestrator | 2025-04-05 12:33:27.720757 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.720762 | orchestrator | Saturday 05 April 2025 12:28:44 +0000 (0:00:00.310) 0:06:51.389 ******** 2025-04-05 12:33:27.720771 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.720776 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720780 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.720785 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720790 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.720795 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720800 | orchestrator | 2025-04-05 12:33:27.720805 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.720810 | orchestrator | Saturday 05 April 2025 12:28:45 +0000 (0:00:00.437) 0:06:51.826 ******** 2025-04-05 12:33:27.720817 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.720822 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720827 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.720832 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720837 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.720842 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720847 | orchestrator | 2025-04-05 12:33:27.720852 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.720857 | orchestrator | Saturday 05 April 2025 12:28:45 +0000 (0:00:00.347) 0:06:52.174 ******** 2025-04-05 12:33:27.720861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.720866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.720871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.720876 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720881 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.720886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.720891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.720896 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.720905 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.720910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.720915 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720920 | orchestrator | 2025-04-05 12:33:27.720925 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.720930 | orchestrator | Saturday 05 April 2025 12:28:46 +0000 (0:00:00.783) 0:06:52.958 ******** 2025-04-05 12:33:27.720935 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720939 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720944 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720949 | orchestrator | 2025-04-05 12:33:27.720954 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.720959 | orchestrator | Saturday 05 April 2025 12:28:46 +0000 (0:00:00.501) 0:06:53.460 ******** 2025-04-05 12:33:27.720964 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.720969 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.720973 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.720978 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.720983 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.720988 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.720993 | orchestrator | 2025-04-05 12:33:27.720998 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.721003 | orchestrator | Saturday 05 April 2025 12:28:47 +0000 (0:00:00.711) 0:06:54.171 ******** 2025-04-05 12:33:27.721011 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721015 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721023 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721028 | orchestrator | 2025-04-05 12:33:27.721032 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.721037 | orchestrator | Saturday 05 April 2025 12:28:47 +0000 (0:00:00.468) 0:06:54.640 ******** 2025-04-05 12:33:27.721042 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721047 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721052 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721057 | orchestrator | 2025-04-05 12:33:27.721061 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-05 12:33:27.721066 | orchestrator | Saturday 05 April 2025 12:28:48 +0000 (0:00:00.640) 0:06:55.281 ******** 2025-04-05 12:33:27.721071 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.721076 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.721081 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.721086 | orchestrator | 2025-04-05 12:33:27.721090 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-05 12:33:27.721107 | orchestrator | Saturday 05 April 2025 12:28:48 +0000 (0:00:00.279) 0:06:55.560 ******** 2025-04-05 12:33:27.721113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:33:27.721117 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:33:27.721122 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:33:27.721127 | orchestrator | 2025-04-05 12:33:27.721132 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-05 12:33:27.721141 | orchestrator | Saturday 05 April 2025 12:28:49 +0000 (0:00:00.804) 
0:06:56.365 ******** 2025-04-05 12:33:27.721146 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721150 | orchestrator | 2025-04-05 12:33:27.721155 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-05 12:33:27.721160 | orchestrator | Saturday 05 April 2025 12:28:50 +0000 (0:00:00.662) 0:06:57.028 ******** 2025-04-05 12:33:27.721165 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721170 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721174 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721179 | orchestrator | 2025-04-05 12:33:27.721184 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-05 12:33:27.721189 | orchestrator | Saturday 05 April 2025 12:28:50 +0000 (0:00:00.274) 0:06:57.302 ******** 2025-04-05 12:33:27.721194 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721199 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721204 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721209 | orchestrator | 2025-04-05 12:33:27.721214 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-05 12:33:27.721219 | orchestrator | Saturday 05 April 2025 12:28:50 +0000 (0:00:00.304) 0:06:57.606 ******** 2025-04-05 12:33:27.721223 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721228 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721233 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721238 | orchestrator | 2025-04-05 12:33:27.721243 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-05 12:33:27.721248 | orchestrator | Saturday 05 April 2025 12:28:51 +0000 (0:00:00.452) 0:06:58.059 ******** 2025-04-05 12:33:27.721253 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721257 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721262 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721267 | orchestrator | 2025-04-05 12:33:27.721272 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-05 12:33:27.721277 | orchestrator | Saturday 05 April 2025 12:28:51 +0000 (0:00:00.300) 0:06:58.360 ******** 2025-04-05 12:33:27.721284 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.721289 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.721294 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.721299 | orchestrator | 2025-04-05 12:33:27.721304 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-05 12:33:27.721309 | orchestrator | Saturday 05 April 2025 12:28:52 +0000 (0:00:00.633) 0:06:58.993 ******** 2025-04-05 12:33:27.721314 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.721319 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.721324 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.721329 | orchestrator | 2025-04-05 12:33:27.721333 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-04-05 12:33:27.721338 | orchestrator | Saturday 05 April 2025 12:28:52 +0000 (0:00:00.304) 0:06:59.298 ******** 2025-04-05 12:33:27.721343 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-04-05 12:33:27.721348 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-05 12:33:27.721353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-05 12:33:27.721358 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-05 12:33:27.721363 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-05 12:33:27.721368 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-05 12:33:27.721373 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-05 12:33:27.721378 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-05 12:33:27.721382 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-05 12:33:27.721387 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-05 12:33:27.721392 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-05 12:33:27.721397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-05 12:33:27.721402 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-05 12:33:27.721407 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-05 12:33:27.721412 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-05 12:33:27.721417 | orchestrator | 2025-04-05 12:33:27.721421 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-04-05 12:33:27.721426 | orchestrator | Saturday 05 April 2025 12:28:56 +0000 (0:00:04.118) 0:07:03.417 ******** 2025-04-05 12:33:27.721431 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721436 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721441 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721446 | orchestrator | 2025-04-05 12:33:27.721461 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-04-05 12:33:27.721467 | orchestrator | Saturday 05 April 2025 12:28:56 +0000 (0:00:00.273) 0:07:03.690 ******** 2025-04-05 12:33:27.721472 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721477 | orchestrator | 2025-04-05 12:33:27.721482 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-04-05 12:33:27.721487 | orchestrator | Saturday 05 April 2025 12:28:57 +0000 (0:00:00.579) 0:07:04.270 ******** 2025-04-05 12:33:27.721491 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-05 12:33:27.721496 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-05 12:33:27.721501 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-05 12:33:27.721509 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-04-05 12:33:27.721514 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-04-05 12:33:27.721518 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-04-05 12:33:27.721523 | orchestrator | 2025-04-05 12:33:27.721528 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-04-05 12:33:27.721535 | orchestrator | Saturday 05 April 2025 12:28:58 +0000 (0:00:01.120) 0:07:05.390 ******** 2025-04-05 12:33:27.721540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:33:27.721545 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.721550 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-05 12:33:27.721554 | orchestrator | 2025-04-05 12:33:27.721559 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-04-05 12:33:27.721564 | orchestrator | Saturday 05 April 2025 12:29:00 +0000 (0:00:01.828) 0:07:07.219 ******** 2025-04-05 12:33:27.721569 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:33:27.721574 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.721579 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.721583 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:33:27.721588 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.721593 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.721598 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:33:27.721603 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.721608 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.721612 | orchestrator | 2025-04-05 12:33:27.721617 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-04-05 12:33:27.721622 | orchestrator | Saturday 05 April 2025 12:29:01 +0000 (0:00:01.071) 0:07:08.290 ******** 2025-04-05 12:33:27.721627 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.721632 | orchestrator | 2025-04-05 12:33:27.721637 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-04-05 12:33:27.721642 | orchestrator | Saturday 05 April 2025 12:29:03 +0000 (0:00:01.978) 0:07:10.269 ******** 2025-04-05 12:33:27.721647 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721652 | orchestrator | 2025-04-05 12:33:27.721657 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-04-05 12:33:27.721662 | orchestrator | Saturday 05 April 2025 12:29:04 +0000 (0:00:00.511) 0:07:10.780 ******** 2025-04-05 12:33:27.721666 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721671 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721676 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721681 | orchestrator | 2025-04-05 12:33:27.721686 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-04-05 12:33:27.721691 | orchestrator | Saturday 05 April 2025 12:29:04 +0000 (0:00:00.287) 0:07:11.068 ******** 2025-04-05 12:33:27.721696 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721701 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721706 | orchestrator | skipping: [testbed-node-5] 2025-04-05 
12:33:27.721710 | orchestrator | 2025-04-05 12:33:27.721715 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-04-05 12:33:27.721720 | orchestrator | Saturday 05 April 2025 12:29:04 +0000 (0:00:00.495) 0:07:11.563 ******** 2025-04-05 12:33:27.721725 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721730 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721735 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721740 | orchestrator | 2025-04-05 12:33:27.721756 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-04-05 12:33:27.721764 | orchestrator | Saturday 05 April 2025 12:29:05 +0000 (0:00:00.300) 0:07:11.863 ******** 2025-04-05 12:33:27.721769 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.721774 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.721779 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.721783 | orchestrator | 2025-04-05 12:33:27.721788 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-04-05 12:33:27.721793 | orchestrator | Saturday 05 April 2025 12:29:05 +0000 (0:00:00.307) 0:07:12.171 ******** 2025-04-05 12:33:27.721798 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721803 | orchestrator | 2025-04-05 12:33:27.721808 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-04-05 12:33:27.721812 | orchestrator | Saturday 05 April 2025 12:29:06 +0000 (0:00:00.782) 0:07:12.953 ******** 2025-04-05 12:33:27.721817 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ad0d437a-29fb-56b5-bf7c-f26bd837f294', 'data_vg': 'ceph-ad0d437a-29fb-56b5-bf7c-f26bd837f294'}) 2025-04-05 12:33:27.721834 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aac11a6-844c-526d-9ac8-c50cbafa4162', 'data_vg': 'ceph-4aac11a6-844c-526d-9ac8-c50cbafa4162'}) 2025-04-05 12:33:27.721840 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-eb474160-46dc-5c48-a12b-143126b3371a', 'data_vg': 'ceph-eb474160-46dc-5c48-a12b-143126b3371a'}) 2025-04-05 12:33:27.721845 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26', 'data_vg': 'ceph-4ecef128-47ae-5e8f-9b67-b09b9dbd9f26'}) 2025-04-05 12:33:27.721850 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7b2d6610-beab-5485-bcb7-dfee77450e0c', 'data_vg': 'ceph-7b2d6610-beab-5485-bcb7-dfee77450e0c'}) 2025-04-05 12:33:27.721855 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bddbd264-0785-5bf3-9ea2-553c515bd099', 'data_vg': 'ceph-bddbd264-0785-5bf3-9ea2-553c515bd099'}) 2025-04-05 12:33:27.721859 | orchestrator | 2025-04-05 12:33:27.721864 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-04-05 12:33:27.721869 | orchestrator | Saturday 05 April 2025 12:29:34 +0000 (0:00:28.014) 0:07:40.967 ******** 2025-04-05 12:33:27.721874 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.721879 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.721884 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.721889 | orchestrator | 2025-04-05 12:33:27.721894 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-04-05 12:33:27.721899 | orchestrator | Saturday 05 April 2025 12:29:34 +0000 (0:00:00.279) 0:07:41.247 ******** 2025-04-05 12:33:27.721904 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721909 | orchestrator | 2025-04-05 12:33:27.721913 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-04-05 12:33:27.721918 | orchestrator | Saturday 05 April 2025 12:29:35 +0000 (0:00:00.475) 0:07:41.722 ******** 2025-04-05 12:33:27.721923 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.721928 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.721933 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.721937 | orchestrator | 2025-04-05 12:33:27.721942 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-04-05 12:33:27.721947 | orchestrator | Saturday 05 April 2025 12:29:35 +0000 (0:00:00.721) 0:07:42.443 ******** 2025-04-05 12:33:27.721952 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.721957 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.721962 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.721969 | orchestrator | 2025-04-05 12:33:27.721974 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-04-05 12:33:27.721979 | orchestrator | Saturday 05 April 2025 12:29:37 +0000 (0:00:01.447) 0:07:43.890 ******** 2025-04-05 12:33:27.721988 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.721993 | orchestrator | 2025-04-05 12:33:27.721998 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-04-05 12:33:27.722003 | orchestrator | Saturday 05 April 2025 12:29:37 +0000 (0:00:00.482) 0:07:44.373 ******** 2025-04-05 12:33:27.722007 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.722033 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.722040 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.722045 | orchestrator | 2025-04-05 12:33:27.722050 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-04-05 12:33:27.722054 | orchestrator | Saturday 05 April 2025 12:29:38 +0000 (0:00:01.250) 0:07:45.624 ******** 2025-04-05 12:33:27.722059 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.722064 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.722069 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.722074 | orchestrator | 2025-04-05 12:33:27.722079 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-04-05 12:33:27.722084 | orchestrator | Saturday 05 April 2025 12:29:39 +0000 (0:00:00.985) 0:07:46.609 ******** 2025-04-05 12:33:27.722089 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.722094 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.722099 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.722103 | orchestrator | 2025-04-05 12:33:27.722111 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-04-05 12:33:27.722116 | orchestrator | Saturday 05 April 2025 12:29:41 +0000 (0:00:01.675) 0:07:48.284 ******** 2025-04-05 12:33:27.722121 | orchestrator | skipping: 
[testbed-node-3] 2025-04-05 12:33:27.722125 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722130 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722135 | orchestrator | 2025-04-05 12:33:27.722140 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-04-05 12:33:27.722145 | orchestrator | Saturday 05 April 2025 12:29:42 +0000 (0:00:00.533) 0:07:48.818 ******** 2025-04-05 12:33:27.722150 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722155 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722159 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722164 | orchestrator | 2025-04-05 12:33:27.722169 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-04-05 12:33:27.722176 | orchestrator | Saturday 05 April 2025 12:29:42 +0000 (0:00:00.357) 0:07:49.175 ******** 2025-04-05 12:33:27.722181 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-05 12:33:27.722186 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-04-05 12:33:27.722191 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-04-05 12:33:27.722196 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-04-05 12:33:27.722201 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-04-05 12:33:27.722206 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-04-05 12:33:27.722211 | orchestrator | 2025-04-05 12:33:27.722216 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-04-05 12:33:27.722233 | orchestrator | Saturday 05 April 2025 12:29:43 +0000 (0:00:00.936) 0:07:50.112 ******** 2025-04-05 12:33:27.722239 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-04-05 12:33:27.722244 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-04-05 12:33:27.722249 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-04-05 12:33:27.722253 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-04-05 12:33:27.722258 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-04-05 12:33:27.722263 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-04-05 12:33:27.722268 | orchestrator | 2025-04-05 12:33:27.722273 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-04-05 12:33:27.722278 | orchestrator | Saturday 05 April 2025 12:29:46 +0000 (0:00:03.542) 0:07:53.654 ******** 2025-04-05 12:33:27.722286 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722291 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.722300 | orchestrator | 2025-04-05 12:33:27.722305 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-04-05 12:33:27.722310 | orchestrator | Saturday 05 April 2025 12:29:49 +0000 (0:00:02.499) 0:07:56.154 ******** 2025-04-05 12:33:27.722315 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722320 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722325 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
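The OSD bring-up logged above follows a fixed pattern: the noup flag is set via the first monitor, ceph-volume creates one bluestore OSD per data/data_vg pair, per-OSD systemd units are generated and started, noup is unset, and the play then polls the cluster until every OSD reports up (hence the single FAILED - RETRYING line). A minimal Ansible sketch of that sequence, assuming the first monitor (testbed-node-0 above) is the delegate and that `ceph osd stat -f json` exposes num_osds/num_up_osds; this is illustrative only, the exact ceph-ansible tasks differ and the JSON field layout varies by Ceph release:

    - name: set noup flag                                   # sketch, not the exact ceph-ansible task
      command: ceph --cluster ceph osd set noup
      delegate_to: "{{ groups['mons'][0] }}"                # assumption: first monitor group member
      run_once: true
      changed_when: false

    - name: use ceph-volume to create bluestore osds        # one call per data/data_vg pair in the log
      command: >
        ceph-volume --cluster ceph lvm create --bluestore
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"

    - name: wait for all osd to be up
      command: ceph --cluster ceph osd stat -f json
      register: osd_stat
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      changed_when: false
      retries: 60                                           # matches the "60 retries left" message above
      delay: 10
      until: >
        (osd_stat.stdout | from_json).num_osds | int > 0 and
        (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
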
2025-04-05 12:33:27.722330 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.722335 | orchestrator | 2025-04-05 12:33:27.722340 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-04-05 12:33:27.722345 | orchestrator | Saturday 05 April 2025 12:30:01 +0000 (0:00:12.547) 0:08:08.702 ******** 2025-04-05 12:33:27.722349 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722357 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722362 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722367 | orchestrator | 2025-04-05 12:33:27.722371 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-04-05 12:33:27.722376 | orchestrator | Saturday 05 April 2025 12:30:02 +0000 (0:00:00.395) 0:08:09.097 ******** 2025-04-05 12:33:27.722381 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722386 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722391 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722396 | orchestrator | 2025-04-05 12:33:27.722400 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.722405 | orchestrator | Saturday 05 April 2025 12:30:03 +0000 (0:00:00.919) 0:08:10.017 ******** 2025-04-05 12:33:27.722410 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.722415 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.722420 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.722424 | orchestrator | 2025-04-05 12:33:27.722429 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-05 12:33:27.722434 | orchestrator | Saturday 05 April 2025 12:30:04 +0000 (0:00:00.771) 0:08:10.788 ******** 2025-04-05 12:33:27.722439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.722444 | orchestrator | 2025-04-05 12:33:27.722449 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-04-05 12:33:27.722453 | orchestrator | Saturday 05 April 2025 12:30:04 +0000 (0:00:00.486) 0:08:11.275 ******** 2025-04-05 12:33:27.722458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.722463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.722468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.722473 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722478 | orchestrator | 2025-04-05 12:33:27.722482 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-04-05 12:33:27.722487 | orchestrator | Saturday 05 April 2025 12:30:04 +0000 (0:00:00.374) 0:08:11.650 ******** 2025-04-05 12:33:27.722492 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722497 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722501 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722506 | orchestrator | 2025-04-05 12:33:27.722511 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-04-05 12:33:27.722516 | orchestrator | Saturday 05 April 2025 12:30:05 +0000 (0:00:00.468) 0:08:12.118 ******** 2025-04-05 12:33:27.722521 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:33:27.722526 | orchestrator | 2025-04-05 12:33:27.722531 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-04-05 12:33:27.722538 | orchestrator | Saturday 05 April 2025 12:30:05 +0000 (0:00:00.205) 0:08:12.323 ******** 2025-04-05 12:33:27.722543 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722548 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722553 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722558 | orchestrator | 2025-04-05 12:33:27.722563 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-04-05 12:33:27.722568 | orchestrator | Saturday 05 April 2025 12:30:05 +0000 (0:00:00.294) 0:08:12.618 ******** 2025-04-05 12:33:27.722573 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722577 | orchestrator | 2025-04-05 12:33:27.722582 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-04-05 12:33:27.722587 | orchestrator | Saturday 05 April 2025 12:30:06 +0000 (0:00:00.218) 0:08:12.836 ******** 2025-04-05 12:33:27.722592 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722597 | orchestrator | 2025-04-05 12:33:27.722605 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-05 12:33:27.722610 | orchestrator | Saturday 05 April 2025 12:30:06 +0000 (0:00:00.213) 0:08:13.050 ******** 2025-04-05 12:33:27.722615 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722619 | orchestrator | 2025-04-05 12:33:27.722624 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-04-05 12:33:27.722629 | orchestrator | Saturday 05 April 2025 12:30:06 +0000 (0:00:00.132) 0:08:13.183 ******** 2025-04-05 12:33:27.722644 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722650 | orchestrator | 2025-04-05 12:33:27.722655 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-04-05 12:33:27.722660 | orchestrator | Saturday 05 April 2025 12:30:06 +0000 (0:00:00.212) 0:08:13.396 ******** 2025-04-05 12:33:27.722665 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722670 | orchestrator | 2025-04-05 12:33:27.722674 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-04-05 12:33:27.722679 | orchestrator | Saturday 05 April 2025 12:30:06 +0000 (0:00:00.211) 0:08:13.607 ******** 2025-04-05 12:33:27.722684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.722689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.722694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.722699 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722704 | orchestrator | 2025-04-05 12:33:27.722709 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-04-05 12:33:27.722713 | orchestrator | Saturday 05 April 2025 12:30:07 +0000 (0:00:00.572) 0:08:14.179 ******** 2025-04-05 12:33:27.722718 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722723 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722731 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722736 | orchestrator | 2025-04-05 12:33:27.722740 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-04-05 12:33:27.722774 | orchestrator | Saturday 05 April 2025 12:30:07 +0000 (0:00:00.462) 0:08:14.642 ******** 2025-04-05 12:33:27.722780 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722785 | orchestrator | 2025-04-05 12:33:27.722790 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-04-05 12:33:27.722795 | orchestrator | Saturday 05 April 2025 12:30:08 +0000 (0:00:00.217) 0:08:14.859 ******** 2025-04-05 12:33:27.722800 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722804 | orchestrator | 2025-04-05 12:33:27.722809 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.722814 | orchestrator | Saturday 05 April 2025 12:30:08 +0000 (0:00:00.207) 0:08:15.066 ******** 2025-04-05 12:33:27.722819 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.722824 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.722829 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.722834 | orchestrator | 2025-04-05 12:33:27.722839 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-04-05 12:33:27.722847 | orchestrator | 2025-04-05 12:33:27.722852 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.722857 | orchestrator | Saturday 05 April 2025 12:30:10 +0000 (0:00:02.466) 0:08:17.533 ******** 2025-04-05 12:33:27.722862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.722867 | orchestrator | 2025-04-05 12:33:27.722872 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-05 12:33:27.722877 | orchestrator | Saturday 05 April 2025 12:30:11 +0000 (0:00:01.162) 0:08:18.696 ******** 2025-04-05 12:33:27.722882 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.722887 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.722892 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.722896 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.722901 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.722906 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.722911 | orchestrator | 2025-04-05 12:33:27.722916 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.722921 | orchestrator | Saturday 05 April 2025 12:30:13 +0000 (0:00:01.339) 0:08:20.035 ******** 2025-04-05 12:33:27.722926 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.722931 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.722935 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.722940 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.722945 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.722950 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.722955 | orchestrator | 2025-04-05 12:33:27.722960 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.722965 | orchestrator | Saturday 05 April 2025 12:30:14 +0000 (0:00:00.768) 0:08:20.803 ******** 2025-04-05 12:33:27.722970 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.722974 | orchestrator | ok: 
[testbed-node-3] 2025-04-05 12:33:27.722979 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.722984 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.722989 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.722994 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.722999 | orchestrator | 2025-04-05 12:33:27.723004 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.723009 | orchestrator | Saturday 05 April 2025 12:30:14 +0000 (0:00:00.884) 0:08:21.687 ******** 2025-04-05 12:33:27.723014 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723019 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723023 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723028 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723033 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723038 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723043 | orchestrator | 2025-04-05 12:33:27.723048 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.723053 | orchestrator | Saturday 05 April 2025 12:30:15 +0000 (0:00:00.727) 0:08:22.415 ******** 2025-04-05 12:33:27.723057 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723062 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723067 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723072 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.723077 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.723082 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.723087 | orchestrator | 2025-04-05 12:33:27.723092 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.723097 | orchestrator | Saturday 05 April 2025 12:30:17 +0000 (0:00:01.289) 0:08:23.704 ******** 2025-04-05 12:33:27.723102 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723119 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723130 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723135 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723140 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723144 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723149 | orchestrator | 2025-04-05 12:33:27.723154 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.723162 | orchestrator | Saturday 05 April 2025 12:30:17 +0000 (0:00:00.690) 0:08:24.395 ******** 2025-04-05 12:33:27.723167 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723172 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723176 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723181 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723186 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723191 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723196 | orchestrator | 2025-04-05 12:33:27.723201 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.723206 | orchestrator | Saturday 05 April 2025 12:30:18 +0000 (0:00:00.793) 0:08:25.188 ******** 2025-04-05 12:33:27.723211 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723215 | orchestrator | skipping: [testbed-node-4] 2025-04-05 
12:33:27.723220 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723228 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723232 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723237 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723242 | orchestrator | 2025-04-05 12:33:27.723247 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.723252 | orchestrator | Saturday 05 April 2025 12:30:19 +0000 (0:00:00.619) 0:08:25.808 ******** 2025-04-05 12:33:27.723257 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723262 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723267 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723272 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723277 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723281 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723286 | orchestrator | 2025-04-05 12:33:27.723291 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.723296 | orchestrator | Saturday 05 April 2025 12:30:19 +0000 (0:00:00.810) 0:08:26.619 ******** 2025-04-05 12:33:27.723301 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723306 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723310 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723315 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723320 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723325 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723330 | orchestrator | 2025-04-05 12:33:27.723335 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.723339 | orchestrator | Saturday 05 April 2025 12:30:20 +0000 (0:00:00.619) 0:08:27.239 ******** 2025-04-05 12:33:27.723344 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723349 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723354 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723359 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.723364 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.723369 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.723374 | orchestrator | 2025-04-05 12:33:27.723378 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.723383 | orchestrator | Saturday 05 April 2025 12:30:21 +0000 (0:00:01.387) 0:08:28.627 ******** 2025-04-05 12:33:27.723388 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723393 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723398 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723403 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723408 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723412 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723421 | orchestrator | 2025-04-05 12:33:27.723426 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.723431 | orchestrator | Saturday 05 April 2025 12:30:22 +0000 (0:00:00.630) 0:08:29.257 ******** 2025-04-05 12:33:27.723435 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723440 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723445 | orchestrator 
| skipping: [testbed-node-5] 2025-04-05 12:33:27.723450 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.723455 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.723460 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.723465 | orchestrator | 2025-04-05 12:33:27.723470 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.723475 | orchestrator | Saturday 05 April 2025 12:30:23 +0000 (0:00:00.615) 0:08:29.873 ******** 2025-04-05 12:33:27.723480 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723484 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723489 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723494 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723499 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723504 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723509 | orchestrator | 2025-04-05 12:33:27.723514 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.723519 | orchestrator | Saturday 05 April 2025 12:30:23 +0000 (0:00:00.509) 0:08:30.382 ******** 2025-04-05 12:33:27.723524 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723529 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723534 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723539 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723543 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723548 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723553 | orchestrator | 2025-04-05 12:33:27.723558 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.723563 | orchestrator | Saturday 05 April 2025 12:30:24 +0000 (0:00:00.696) 0:08:31.079 ******** 2025-04-05 12:33:27.723568 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723573 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723578 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723583 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723587 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723592 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723600 | orchestrator | 2025-04-05 12:33:27.723605 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.723620 | orchestrator | Saturday 05 April 2025 12:30:24 +0000 (0:00:00.529) 0:08:31.609 ******** 2025-04-05 12:33:27.723626 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723631 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723636 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723641 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723646 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723651 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723656 | orchestrator | 2025-04-05 12:33:27.723660 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.723665 | orchestrator | Saturday 05 April 2025 12:30:25 +0000 (0:00:00.658) 0:08:32.267 ******** 2025-04-05 12:33:27.723670 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723675 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723680 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723685 | 
orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723690 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723695 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723699 | orchestrator | 2025-04-05 12:33:27.723704 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.723709 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.529) 0:08:32.797 ******** 2025-04-05 12:33:27.723719 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723724 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723728 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723733 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.723738 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.723743 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.723760 | orchestrator | 2025-04-05 12:33:27.723765 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.723770 | orchestrator | Saturday 05 April 2025 12:30:26 +0000 (0:00:00.663) 0:08:33.460 ******** 2025-04-05 12:33:27.723775 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.723779 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.723784 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.723789 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.723794 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.723799 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.723803 | orchestrator | 2025-04-05 12:33:27.723808 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.723813 | orchestrator | Saturday 05 April 2025 12:30:27 +0000 (0:00:00.488) 0:08:33.949 ******** 2025-04-05 12:33:27.723818 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723823 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723828 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723832 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723837 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723842 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723847 | orchestrator | 2025-04-05 12:33:27.723852 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.723857 | orchestrator | Saturday 05 April 2025 12:30:27 +0000 (0:00:00.714) 0:08:34.664 ******** 2025-04-05 12:33:27.723862 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723866 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723871 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723876 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723881 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723886 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723891 | orchestrator | 2025-04-05 12:33:27.723898 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.723903 | orchestrator | Saturday 05 April 2025 12:30:28 +0000 (0:00:00.547) 0:08:35.211 ******** 2025-04-05 12:33:27.723908 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723913 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723918 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723923 | orchestrator | skipping: [testbed-node-0] 
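The long run of "check for a … container" tasks above all follow the same pattern: query the container runtime for a daemon-specific container on each host, never fail the play, and later fold the result into a handler_*_status fact that the restart handlers key off. A sketch of that pattern, assuming podman/docker is addressed via a container_binary variable and that container names embed the daemon type and hostname (the exact names and variables in ceph-ansible may differ):

    - name: check for a mon container                       # same shape is used for osd/mds/rgw/mgr/crash
      command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false
      check_mode: false

    - name: set_fact handler_mon_status
      set_fact:
        handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
      when: inventory_hostname in groups.get('mons', [])    # assumption: monitor group name
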
2025-04-05 12:33:27.723927 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723932 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723937 | orchestrator | 2025-04-05 12:33:27.723942 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.723947 | orchestrator | Saturday 05 April 2025 12:30:29 +0000 (0:00:00.645) 0:08:35.857 ******** 2025-04-05 12:33:27.723952 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.723957 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.723962 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.723966 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.723971 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.723976 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.723981 | orchestrator | 2025-04-05 12:33:27.723986 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.723991 | orchestrator | Saturday 05 April 2025 12:30:29 +0000 (0:00:00.507) 0:08:36.364 ******** 2025-04-05 12:33:27.723996 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724000 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724005 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724013 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724020 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724026 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724031 | orchestrator | 2025-04-05 12:33:27.724036 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.724040 | orchestrator | Saturday 05 April 2025 12:30:30 +0000 (0:00:00.740) 0:08:37.105 ******** 2025-04-05 12:33:27.724045 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724050 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724055 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724060 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724065 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724070 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724075 | orchestrator | 2025-04-05 12:33:27.724080 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.724085 | orchestrator | Saturday 05 April 2025 12:30:30 +0000 (0:00:00.562) 0:08:37.667 ******** 2025-04-05 12:33:27.724089 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724094 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724099 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724104 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724109 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724125 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724131 | orchestrator | 2025-04-05 12:33:27.724136 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.724141 | orchestrator | Saturday 05 April 2025 12:30:31 +0000 (0:00:00.726) 0:08:38.394 ******** 2025-04-05 12:33:27.724146 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724151 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724156 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724161 | orchestrator | 
skipping: [testbed-node-0] 2025-04-05 12:33:27.724166 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724170 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724175 | orchestrator | 2025-04-05 12:33:27.724180 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.724185 | orchestrator | Saturday 05 April 2025 12:30:32 +0000 (0:00:00.595) 0:08:38.989 ******** 2025-04-05 12:33:27.724192 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724197 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724202 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724206 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724211 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724216 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724221 | orchestrator | 2025-04-05 12:33:27.724226 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.724231 | orchestrator | Saturday 05 April 2025 12:30:33 +0000 (0:00:00.790) 0:08:39.779 ******** 2025-04-05 12:33:27.724236 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724240 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724245 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724250 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724255 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724259 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724264 | orchestrator | 2025-04-05 12:33:27.724269 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.724274 | orchestrator | Saturday 05 April 2025 12:30:33 +0000 (0:00:00.634) 0:08:40.413 ******** 2025-04-05 12:33:27.724279 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724284 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724289 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724294 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724298 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724306 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724311 | orchestrator | 2025-04-05 12:33:27.724316 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.724321 | orchestrator | Saturday 05 April 2025 12:30:34 +0000 (0:00:00.811) 0:08:41.225 ******** 2025-04-05 12:33:27.724326 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724331 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724336 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724340 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724345 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724350 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724355 | orchestrator | 2025-04-05 12:33:27.724360 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.724365 | orchestrator | Saturday 05 April 2025 12:30:35 +0000 (0:00:00.529) 0:08:41.755 ******** 2025-04-05 12:33:27.724370 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.724377 | orchestrator | skipping: [testbed-node-3] => (item=)  
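The skipped ceph-config block above exists to size osd_memory_target: it counts how many OSDs a host will carry (from 'ceph-volume lvm batch --report', plus any OSDs 'ceph-volume lvm list' already knows about) and then gives each OSD a share of the host's RAM, unless ceph_conf_overrides already pins the value. Roughly, as an Ansible sketch; the 0.7 safety factor, variable names, and report parsing are assumptions, not the exact ceph-ansible defaults:

    - name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
      command: "ceph-volume --cluster ceph lvm batch --report --format=json {{ devices | join(' ') }}"
      register: lvm_batch_report
      changed_when: false

    - name: set_fact num_osds                               # new-style report is a list, one entry per OSD
      set_fact:
        num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"

    - name: set_fact _osd_memory_target                     # per-OSD share of host memory, in bytes
      set_fact:
        _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] | int) * 1048576 * 0.7 / (num_osds | int)) | int }}"
      when:
        - num_osds | int > 0
        - osd_memory_target is not defined                  # respect an explicit override
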
2025-04-05 12:33:27.724382 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.724387 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.724392 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724397 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.724402 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.724406 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724411 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.724416 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-05 12:33:27.724421 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724426 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.724431 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-05 12:33:27.724436 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724440 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724445 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.724450 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-05 12:33:27.724455 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724462 | orchestrator | 2025-04-05 12:33:27.724467 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.724472 | orchestrator | Saturday 05 April 2025 12:30:35 +0000 (0:00:00.644) 0:08:42.399 ******** 2025-04-05 12:33:27.724477 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-05 12:33:27.724482 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-05 12:33:27.724487 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-05 12:33:27.724491 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-05 12:33:27.724496 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724501 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-05 12:33:27.724506 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-05 12:33:27.724511 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724516 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-05 12:33:27.724521 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-05 12:33:27.724526 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724531 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-05 12:33:27.724536 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-05 12:33:27.724541 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724546 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724561 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-05 12:33:27.724567 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-05 12:33:27.724575 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724580 | orchestrator | 2025-04-05 12:33:27.724585 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.724590 | orchestrator | Saturday 05 April 2025 12:30:36 +0000 (0:00:00.602) 0:08:43.002 ******** 2025-04-05 12:33:27.724595 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724600 | 
orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724604 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724609 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724614 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724619 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724624 | orchestrator | 2025-04-05 12:33:27.724628 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.724633 | orchestrator | Saturday 05 April 2025 12:30:37 +0000 (0:00:00.739) 0:08:43.742 ******** 2025-04-05 12:33:27.724638 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724643 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724648 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724653 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724657 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724662 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724667 | orchestrator | 2025-04-05 12:33:27.724672 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.724677 | orchestrator | Saturday 05 April 2025 12:30:37 +0000 (0:00:00.601) 0:08:44.343 ******** 2025-04-05 12:33:27.724682 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724687 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724691 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724696 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724701 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724706 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724710 | orchestrator | 2025-04-05 12:33:27.724715 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.724720 | orchestrator | Saturday 05 April 2025 12:30:38 +0000 (0:00:00.815) 0:08:45.159 ******** 2025-04-05 12:33:27.724725 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724730 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724735 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724740 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724757 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724763 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724768 | orchestrator | 2025-04-05 12:33:27.724772 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.724777 | orchestrator | Saturday 05 April 2025 12:30:39 +0000 (0:00:00.624) 0:08:45.784 ******** 2025-04-05 12:33:27.724782 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724787 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724792 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724797 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724802 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724807 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724811 | orchestrator | 2025-04-05 12:33:27.724816 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.724821 | orchestrator | Saturday 05 April 2025 12:30:39 +0000 (0:00:00.828) 0:08:46.612 ******** 2025-04-05 12:33:27.724826 | 
orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724831 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724836 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724841 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.724846 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.724851 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.724856 | orchestrator | 2025-04-05 12:33:27.724861 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.724871 | orchestrator | Saturday 05 April 2025 12:30:40 +0000 (0:00:00.623) 0:08:47.236 ******** 2025-04-05 12:33:27.724876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.724881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.724889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.724894 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724899 | orchestrator | 2025-04-05 12:33:27.724904 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.724909 | orchestrator | Saturday 05 April 2025 12:30:40 +0000 (0:00:00.322) 0:08:47.558 ******** 2025-04-05 12:33:27.724914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.724919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.724924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.724929 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724934 | orchestrator | 2025-04-05 12:33:27.724938 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.724943 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.306) 0:08:47.865 ******** 2025-04-05 12:33:27.724948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.724953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.724958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.724963 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724968 | orchestrator | 2025-04-05 12:33:27.724973 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.724978 | orchestrator | Saturday 05 April 2025 12:30:41 +0000 (0:00:00.462) 0:08:48.328 ******** 2025-04-05 12:33:27.724982 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.724987 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.724992 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.724997 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725002 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725018 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725026 | orchestrator | 2025-04-05 12:33:27.725031 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.725036 | orchestrator | Saturday 05 April 2025 12:30:42 +0000 (0:00:00.695) 0:08:49.024 ******** 2025-04-05 12:33:27.725041 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.725046 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725051 | orchestrator | skipping: 
[testbed-node-4] => (item=0)  2025-04-05 12:33:27.725056 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725061 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.725065 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725070 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.725075 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725080 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.725085 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725089 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.725094 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725099 | orchestrator | 2025-04-05 12:33:27.725104 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.725109 | orchestrator | Saturday 05 April 2025 12:30:43 +0000 (0:00:00.794) 0:08:49.819 ******** 2025-04-05 12:33:27.725113 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725118 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725123 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725128 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725133 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725140 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725145 | orchestrator | 2025-04-05 12:33:27.725150 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.725155 | orchestrator | Saturday 05 April 2025 12:30:43 +0000 (0:00:00.609) 0:08:50.428 ******** 2025-04-05 12:33:27.725160 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725164 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725169 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725174 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725179 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725184 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725189 | orchestrator | 2025-04-05 12:33:27.725194 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.725198 | orchestrator | Saturday 05 April 2025 12:30:44 +0000 (0:00:00.479) 0:08:50.907 ******** 2025-04-05 12:33:27.725203 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.725208 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.725213 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725218 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725223 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.725228 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725232 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-05 12:33:27.725237 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725242 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-05 12:33:27.725247 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725252 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-05 12:33:27.725257 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725262 | orchestrator | 2025-04-05 12:33:27.725267 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 
12:33:27.725272 | orchestrator | Saturday 05 April 2025 12:30:45 +0000 (0:00:00.937) 0:08:51.845 ******** 2025-04-05 12:33:27.725277 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.725282 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725287 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.725292 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725297 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.725302 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725306 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725311 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725316 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725321 | orchestrator | 2025-04-05 12:33:27.725326 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.725331 | orchestrator | Saturday 05 April 2025 12:30:45 +0000 (0:00:00.542) 0:08:52.387 ******** 2025-04-05 12:33:27.725336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.725341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.725346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.725350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.725355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.725360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.725365 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.725375 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.725382 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.725394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-05 12:33:27.725399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-05 12:33:27.725404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-05 12:33:27.725409 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725416 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-05 12:33:27.725421 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-05 12:33:27.725426 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-05 12:33:27.725431 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725436 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725440 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-05 12:33:27.725445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-05 12:33:27.725450 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-05 12:33:27.725455 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725460 | 
orchestrator | 2025-04-05 12:33:27.725464 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.725469 | orchestrator | Saturday 05 April 2025 12:30:47 +0000 (0:00:01.426) 0:08:53.813 ******** 2025-04-05 12:33:27.725474 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725479 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725484 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725489 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725493 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725498 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725503 | orchestrator | 2025-04-05 12:33:27.725508 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.725512 | orchestrator | Saturday 05 April 2025 12:30:48 +0000 (0:00:01.234) 0:08:55.048 ******** 2025-04-05 12:33:27.725517 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.725522 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.725527 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725532 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.725537 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725541 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725546 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725551 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725556 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725560 | orchestrator | 2025-04-05 12:33:27.725565 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.725570 | orchestrator | Saturday 05 April 2025 12:30:49 +0000 (0:00:01.022) 0:08:56.070 ******** 2025-04-05 12:33:27.725575 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725580 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725585 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725592 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725597 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725602 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725607 | orchestrator | 2025-04-05 12:33:27.725612 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.725617 | orchestrator | Saturday 05 April 2025 12:30:50 +0000 (0:00:01.137) 0:08:57.208 ******** 2025-04-05 12:33:27.725621 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.725626 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.725631 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.725636 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:33:27.725641 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:33:27.725646 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:33:27.725653 | orchestrator | 2025-04-05 12:33:27.725658 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-04-05 12:33:27.725663 | orchestrator | Saturday 05 April 2025 12:30:51 +0000 (0:00:01.042) 0:08:58.250 ******** 2025-04-05 12:33:27.725668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.725673 | orchestrator | 2025-04-05 12:33:27.725678 | orchestrator 
| TASK [ceph-crash : get keys from monitors] ************************************* 2025-04-05 12:33:27.725683 | orchestrator | Saturday 05 April 2025 12:30:54 +0000 (0:00:02.879) 0:09:01.130 ******** 2025-04-05 12:33:27.725688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.725693 | orchestrator | 2025-04-05 12:33:27.725697 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-04-05 12:33:27.725705 | orchestrator | Saturday 05 April 2025 12:30:55 +0000 (0:00:01.407) 0:09:02.538 ******** 2025-04-05 12:33:27.725710 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.725715 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.725719 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.725724 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.725729 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.725734 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.725739 | orchestrator | 2025-04-05 12:33:27.725755 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-04-05 12:33:27.725760 | orchestrator | Saturday 05 April 2025 12:30:57 +0000 (0:00:01.288) 0:09:03.826 ******** 2025-04-05 12:33:27.725765 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.725770 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.725775 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.725779 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.725784 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.725789 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.725794 | orchestrator | 2025-04-05 12:33:27.725801 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-04-05 12:33:27.725806 | orchestrator | Saturday 05 April 2025 12:30:58 +0000 (0:00:01.194) 0:09:05.021 ******** 2025-04-05 12:33:27.725811 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.725816 | orchestrator | 2025-04-05 12:33:27.725821 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-04-05 12:33:27.725826 | orchestrator | Saturday 05 April 2025 12:30:59 +0000 (0:00:01.133) 0:09:06.154 ******** 2025-04-05 12:33:27.725831 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.725836 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.725841 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.725846 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.725854 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.725859 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.725864 | orchestrator | 2025-04-05 12:33:27.725869 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-04-05 12:33:27.725874 | orchestrator | Saturday 05 April 2025 12:31:00 +0000 (0:00:01.514) 0:09:07.669 ******** 2025-04-05 12:33:27.725879 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.725883 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.725888 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.725893 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.725898 | orchestrator | changed: [testbed-node-2] 2025-04-05 
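The ceph-crash sequence above (client.crash keyring, key distribution, /var/lib/ceph/crash/posted, systemd unit, service start) is what puts a crash-collector container on every node. A minimal sketch of the last two steps is shown below, assuming a templated ceph-crash@.service unit and standard module arguments; the template name, unit path, and variables are illustrative assumptions, not the actual ceph-ansible ceph-crash role source.

    - name: generate systemd unit file for ceph-crash container (illustrative)
      ansible.builtin.template:
        src: ceph-crash.service.j2            # assumed template name
        dest: /etc/systemd/system/ceph-crash@.service
        owner: root
        group: root
        mode: "0644"

    - name: start the ceph-crash service (illustrative)
      ansible.builtin.systemd:
        name: "ceph-crash@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true
        daemon_reload: true
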
12:33:27.725903 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.725907 | orchestrator | 2025-04-05 12:33:27.725912 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-04-05 12:33:27.725917 | orchestrator | Saturday 05 April 2025 12:31:05 +0000 (0:00:04.171) 0:09:11.840 ******** 2025-04-05 12:33:27.725922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:33:27.725930 | orchestrator | 2025-04-05 12:33:27.725935 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-04-05 12:33:27.725940 | orchestrator | Saturday 05 April 2025 12:31:06 +0000 (0:00:01.061) 0:09:12.902 ******** 2025-04-05 12:33:27.725944 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.725949 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.725954 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.725959 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.725964 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.725968 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.725973 | orchestrator | 2025-04-05 12:33:27.725978 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-04-05 12:33:27.725983 | orchestrator | Saturday 05 April 2025 12:31:06 +0000 (0:00:00.554) 0:09:13.456 ******** 2025-04-05 12:33:27.725988 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.725993 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.725997 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.726002 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:33:27.726007 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:33:27.726024 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:33:27.726030 | orchestrator | 2025-04-05 12:33:27.726035 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-04-05 12:33:27.726040 | orchestrator | Saturday 05 April 2025 12:31:08 +0000 (0:00:02.229) 0:09:15.685 ******** 2025-04-05 12:33:27.726045 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726050 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726055 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726060 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:33:27.726065 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:33:27.726070 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:33:27.726075 | orchestrator | 2025-04-05 12:33:27.726080 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-04-05 12:33:27.726085 | orchestrator | 2025-04-05 12:33:27.726089 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.726094 | orchestrator | Saturday 05 April 2025 12:31:11 +0000 (0:00:02.306) 0:09:17.991 ******** 2025-04-05 12:33:27.726099 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.726104 | orchestrator | 2025-04-05 12:33:27.726109 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-05 12:33:27.726114 | orchestrator | Saturday 05 April 2025 12:31:11 +0000 (0:00:00.457) 0:09:18.448 ******** 2025-04-05 12:33:27.726119 | 
orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726127 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726132 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726137 | orchestrator | 2025-04-05 12:33:27.726142 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.726147 | orchestrator | Saturday 05 April 2025 12:31:12 +0000 (0:00:00.519) 0:09:18.967 ******** 2025-04-05 12:33:27.726152 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726157 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726162 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726167 | orchestrator | 2025-04-05 12:33:27.726172 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.726177 | orchestrator | Saturday 05 April 2025 12:31:12 +0000 (0:00:00.619) 0:09:19.587 ******** 2025-04-05 12:33:27.726182 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726186 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726191 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726196 | orchestrator | 2025-04-05 12:33:27.726201 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.726206 | orchestrator | Saturday 05 April 2025 12:31:13 +0000 (0:00:00.627) 0:09:20.214 ******** 2025-04-05 12:33:27.726214 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726219 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726224 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726229 | orchestrator | 2025-04-05 12:33:27.726234 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.726239 | orchestrator | Saturday 05 April 2025 12:31:14 +0000 (0:00:00.630) 0:09:20.845 ******** 2025-04-05 12:33:27.726243 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726248 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726253 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726258 | orchestrator | 2025-04-05 12:33:27.726263 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.726268 | orchestrator | Saturday 05 April 2025 12:31:14 +0000 (0:00:00.439) 0:09:21.285 ******** 2025-04-05 12:33:27.726273 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726277 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726282 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726287 | orchestrator | 2025-04-05 12:33:27.726295 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.726300 | orchestrator | Saturday 05 April 2025 12:31:14 +0000 (0:00:00.279) 0:09:21.564 ******** 2025-04-05 12:33:27.726305 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726309 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726317 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726322 | orchestrator | 2025-04-05 12:33:27.726327 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.726332 | orchestrator | Saturday 05 April 2025 12:31:15 +0000 (0:00:00.255) 0:09:21.819 ******** 2025-04-05 12:33:27.726336 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726341 | orchestrator | skipping: [testbed-node-4] 
2025-04-05 12:33:27.726346 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726351 | orchestrator | 2025-04-05 12:33:27.726356 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.726361 | orchestrator | Saturday 05 April 2025 12:31:15 +0000 (0:00:00.262) 0:09:22.082 ******** 2025-04-05 12:33:27.726366 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726370 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726375 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726380 | orchestrator | 2025-04-05 12:33:27.726385 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.726390 | orchestrator | Saturday 05 April 2025 12:31:15 +0000 (0:00:00.446) 0:09:22.528 ******** 2025-04-05 12:33:27.726394 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726399 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726404 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726409 | orchestrator | 2025-04-05 12:33:27.726414 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-05 12:33:27.726419 | orchestrator | Saturday 05 April 2025 12:31:16 +0000 (0:00:00.278) 0:09:22.807 ******** 2025-04-05 12:33:27.726424 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726428 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726433 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726438 | orchestrator | 2025-04-05 12:33:27.726443 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.726448 | orchestrator | Saturday 05 April 2025 12:31:16 +0000 (0:00:00.601) 0:09:23.409 ******** 2025-04-05 12:33:27.726453 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726458 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726463 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726467 | orchestrator | 2025-04-05 12:33:27.726472 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.726477 | orchestrator | Saturday 05 April 2025 12:31:16 +0000 (0:00:00.269) 0:09:23.679 ******** 2025-04-05 12:33:27.726482 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726490 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726495 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726499 | orchestrator | 2025-04-05 12:33:27.726504 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.726509 | orchestrator | Saturday 05 April 2025 12:31:17 +0000 (0:00:00.419) 0:09:24.098 ******** 2025-04-05 12:33:27.726514 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726519 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726524 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726529 | orchestrator | 2025-04-05 12:33:27.726534 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.726538 | orchestrator | Saturday 05 April 2025 12:31:17 +0000 (0:00:00.295) 0:09:24.394 ******** 2025-04-05 12:33:27.726543 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726548 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726553 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726558 | orchestrator | 
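The "check for a ... container" tasks above only record whether a matching container is already running on each node; their results feed the handler_*_status facts that follow and decide whether a restart handler may run. A minimal sketch of one such check, assuming podman as the container runtime and a ceph-mds-<hostname> naming scheme (the real tasks use the configured container_binary and ceph-ansible's own naming):

    - name: check for a mds container (illustrative)
      ansible.builtin.command:
        cmd: "podman ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}"
      register: ceph_mds_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_mds_status (illustrative)
      ansible.builtin.set_fact:
        handler_mds_status: "{{ ceph_mds_container_stat.stdout | length > 0 }}"
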
2025-04-05 12:33:27.726563 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.726568 | orchestrator | Saturday 05 April 2025 12:31:17 +0000 (0:00:00.304) 0:09:24.698 ******** 2025-04-05 12:33:27.726573 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726578 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726582 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726587 | orchestrator | 2025-04-05 12:33:27.726592 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.726597 | orchestrator | Saturday 05 April 2025 12:31:18 +0000 (0:00:00.288) 0:09:24.986 ******** 2025-04-05 12:33:27.726602 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726607 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726612 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726620 | orchestrator | 2025-04-05 12:33:27.726625 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.726630 | orchestrator | Saturday 05 April 2025 12:31:18 +0000 (0:00:00.425) 0:09:25.412 ******** 2025-04-05 12:33:27.726635 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726640 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726645 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726649 | orchestrator | 2025-04-05 12:33:27.726654 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.726659 | orchestrator | Saturday 05 April 2025 12:31:18 +0000 (0:00:00.267) 0:09:25.680 ******** 2025-04-05 12:33:27.726664 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726669 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726674 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726679 | orchestrator | 2025-04-05 12:33:27.726684 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.726688 | orchestrator | Saturday 05 April 2025 12:31:19 +0000 (0:00:00.264) 0:09:25.945 ******** 2025-04-05 12:33:27.726693 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.726698 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.726703 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.726708 | orchestrator | 2025-04-05 12:33:27.726713 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.726718 | orchestrator | Saturday 05 April 2025 12:31:19 +0000 (0:00:00.291) 0:09:26.236 ******** 2025-04-05 12:33:27.726723 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726728 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726732 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726737 | orchestrator | 2025-04-05 12:33:27.726742 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.726773 | orchestrator | Saturday 05 April 2025 12:31:19 +0000 (0:00:00.429) 0:09:26.665 ******** 2025-04-05 12:33:27.726778 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726783 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726788 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726796 | orchestrator | 2025-04-05 12:33:27.726803 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-04-05 12:33:27.726808 | orchestrator | Saturday 05 April 2025 12:31:20 +0000 (0:00:00.306) 0:09:26.972 ******** 2025-04-05 12:33:27.726813 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726818 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726823 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726827 | orchestrator | 2025-04-05 12:33:27.726832 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.726837 | orchestrator | Saturday 05 April 2025 12:31:20 +0000 (0:00:00.305) 0:09:27.278 ******** 2025-04-05 12:33:27.726842 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726847 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726852 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726857 | orchestrator | 2025-04-05 12:33:27.726861 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.726866 | orchestrator | Saturday 05 April 2025 12:31:20 +0000 (0:00:00.304) 0:09:27.583 ******** 2025-04-05 12:33:27.726871 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726876 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726881 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726886 | orchestrator | 2025-04-05 12:33:27.726891 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.726896 | orchestrator | Saturday 05 April 2025 12:31:21 +0000 (0:00:00.424) 0:09:28.008 ******** 2025-04-05 12:33:27.726901 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726906 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726911 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726916 | orchestrator | 2025-04-05 12:33:27.726920 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.726925 | orchestrator | Saturday 05 April 2025 12:31:21 +0000 (0:00:00.246) 0:09:28.254 ******** 2025-04-05 12:33:27.726930 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726935 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726940 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726945 | orchestrator | 2025-04-05 12:33:27.726950 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.726955 | orchestrator | Saturday 05 April 2025 12:31:21 +0000 (0:00:00.295) 0:09:28.549 ******** 2025-04-05 12:33:27.726960 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726965 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.726970 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.726975 | orchestrator | 2025-04-05 12:33:27.726980 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.726985 | orchestrator | Saturday 05 April 2025 12:31:22 +0000 (0:00:00.286) 0:09:28.835 ******** 2025-04-05 12:33:27.726990 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.726995 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727000 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727004 | orchestrator | 2025-04-05 12:33:27.727009 | orchestrator | TASK [ceph-config : set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.727014 | orchestrator | Saturday 05 April 2025 12:31:22 +0000 (0:00:00.523) 0:09:29.359 ******** 2025-04-05 12:33:27.727019 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727024 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727029 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727034 | orchestrator | 2025-04-05 12:33:27.727039 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.727044 | orchestrator | Saturday 05 April 2025 12:31:22 +0000 (0:00:00.277) 0:09:29.637 ******** 2025-04-05 12:33:27.727049 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727054 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727061 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727066 | orchestrator | 2025-04-05 12:33:27.727071 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.727076 | orchestrator | Saturday 05 April 2025 12:31:23 +0000 (0:00:00.302) 0:09:29.940 ******** 2025-04-05 12:33:27.727081 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727086 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727091 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727096 | orchestrator | 2025-04-05 12:33:27.727100 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.727105 | orchestrator | Saturday 05 April 2025 12:31:23 +0000 (0:00:00.246) 0:09:30.186 ******** 2025-04-05 12:33:27.727110 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.727115 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.727120 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727125 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.727130 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.727135 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727140 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.727144 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.727149 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727154 | orchestrator | 2025-04-05 12:33:27.727159 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.727164 | orchestrator | Saturday 05 April 2025 12:31:23 +0000 (0:00:00.465) 0:09:30.651 ******** 2025-04-05 12:33:27.727169 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-05 12:33:27.727174 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-05 12:33:27.727179 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-05 12:33:27.727184 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-05 12:33:27.727188 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727193 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727198 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-05 12:33:27.727203 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-05 12:33:27.727210 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727217 | orchestrator | 
2025-04-05 12:33:27.727222 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.727227 | orchestrator | Saturday 05 April 2025 12:31:24 +0000 (0:00:00.323) 0:09:30.974 ******** 2025-04-05 12:33:27.727232 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727237 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727242 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727247 | orchestrator | 2025-04-05 12:33:27.727252 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.727256 | orchestrator | Saturday 05 April 2025 12:31:24 +0000 (0:00:00.330) 0:09:31.305 ******** 2025-04-05 12:33:27.727261 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727266 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727271 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727276 | orchestrator | 2025-04-05 12:33:27.727281 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.727286 | orchestrator | Saturday 05 April 2025 12:31:24 +0000 (0:00:00.310) 0:09:31.616 ******** 2025-04-05 12:33:27.727291 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727295 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727300 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727305 | orchestrator | 2025-04-05 12:33:27.727310 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.727315 | orchestrator | Saturday 05 April 2025 12:31:25 +0000 (0:00:00.562) 0:09:32.178 ******** 2025-04-05 12:33:27.727322 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727328 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727332 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727337 | orchestrator | 2025-04-05 12:33:27.727342 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.727347 | orchestrator | Saturday 05 April 2025 12:31:25 +0000 (0:00:00.344) 0:09:32.523 ******** 2025-04-05 12:33:27.727352 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727357 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727362 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727367 | orchestrator | 2025-04-05 12:33:27.727372 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.727376 | orchestrator | Saturday 05 April 2025 12:31:26 +0000 (0:00:00.364) 0:09:32.888 ******** 2025-04-05 12:33:27.727381 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727386 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727391 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727396 | orchestrator | 2025-04-05 12:33:27.727403 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.727408 | orchestrator | Saturday 05 April 2025 12:31:26 +0000 (0:00:00.341) 0:09:33.229 ******** 2025-04-05 12:33:27.727413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.727418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.727423 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.727428 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727433 | orchestrator | 2025-04-05 12:33:27.727438 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.727442 | orchestrator | Saturday 05 April 2025 12:31:27 +0000 (0:00:00.986) 0:09:34.216 ******** 2025-04-05 12:33:27.727447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.727452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.727457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.727462 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727467 | orchestrator | 2025-04-05 12:33:27.727472 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.727477 | orchestrator | Saturday 05 April 2025 12:31:27 +0000 (0:00:00.433) 0:09:34.649 ******** 2025-04-05 12:33:27.727482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.727487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.727492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.727496 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727501 | orchestrator | 2025-04-05 12:33:27.727506 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.727511 | orchestrator | Saturday 05 April 2025 12:31:28 +0000 (0:00:00.473) 0:09:35.123 ******** 2025-04-05 12:33:27.727516 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727521 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727526 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727531 | orchestrator | 2025-04-05 12:33:27.727536 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.727541 | orchestrator | Saturday 05 April 2025 12:31:28 +0000 (0:00:00.324) 0:09:35.447 ******** 2025-04-05 12:33:27.727546 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.727551 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727555 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.727560 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727565 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.727570 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727575 | orchestrator | 2025-04-05 12:33:27.727583 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.727588 | orchestrator | Saturday 05 April 2025 12:31:29 +0000 (0:00:00.395) 0:09:35.843 ******** 2025-04-05 12:33:27.727593 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727598 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727602 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727607 | orchestrator | 2025-04-05 12:33:27.727612 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.727617 | orchestrator | Saturday 05 April 2025 12:31:29 +0000 (0:00:00.456) 0:09:36.299 ******** 2025-04-05 12:33:27.727624 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727629 | 
orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727634 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727639 | orchestrator | 2025-04-05 12:33:27.727643 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.727648 | orchestrator | Saturday 05 April 2025 12:31:29 +0000 (0:00:00.287) 0:09:36.587 ******** 2025-04-05 12:33:27.727653 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.727658 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727663 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.727668 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727673 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.727678 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727683 | orchestrator | 2025-04-05 12:33:27.727687 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.727692 | orchestrator | Saturday 05 April 2025 12:31:30 +0000 (0:00:00.430) 0:09:37.017 ******** 2025-04-05 12:33:27.727697 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.727702 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727707 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.727712 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727717 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.727722 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727727 | orchestrator | 2025-04-05 12:33:27.727732 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.727737 | orchestrator | Saturday 05 April 2025 12:31:30 +0000 (0:00:00.295) 0:09:37.313 ******** 2025-04-05 12:33:27.727742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.727759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.727764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.727769 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.727774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.727779 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.727784 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727789 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727794 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.727798 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.727803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.727808 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727813 | orchestrator | 2025-04-05 12:33:27.727818 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.727823 | orchestrator | Saturday 05 April 2025 12:31:31 +0000 (0:00:00.669) 0:09:37.983 ******** 
2025-04-05 12:33:27.727828 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727837 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727842 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727847 | orchestrator | 2025-04-05 12:33:27.727852 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.727857 | orchestrator | Saturday 05 April 2025 12:31:31 +0000 (0:00:00.492) 0:09:38.476 ******** 2025-04-05 12:33:27.727862 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.727867 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727872 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.727877 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727882 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.727887 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727892 | orchestrator | 2025-04-05 12:33:27.727896 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.727902 | orchestrator | Saturday 05 April 2025 12:31:32 +0000 (0:00:00.639) 0:09:39.115 ******** 2025-04-05 12:33:27.727906 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727911 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727916 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727924 | orchestrator | 2025-04-05 12:33:27.727929 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.727934 | orchestrator | Saturday 05 April 2025 12:31:32 +0000 (0:00:00.468) 0:09:39.583 ******** 2025-04-05 12:33:27.727939 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.727944 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727949 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727954 | orchestrator | 2025-04-05 12:33:27.727959 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-04-05 12:33:27.727964 | orchestrator | Saturday 05 April 2025 12:31:33 +0000 (0:00:00.627) 0:09:40.211 ******** 2025-04-05 12:33:27.727968 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.727973 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.727978 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-04-05 12:33:27.727983 | orchestrator | 2025-04-05 12:33:27.727988 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-04-05 12:33:27.727993 | orchestrator | Saturday 05 April 2025 12:31:33 +0000 (0:00:00.393) 0:09:40.605 ******** 2025-04-05 12:33:27.727998 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.728006 | orchestrator | 2025-04-05 12:33:27.728011 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-04-05 12:33:27.728016 | orchestrator | Saturday 05 April 2025 12:31:35 +0000 (0:00:01.585) 0:09:42.190 ******** 2025-04-05 12:33:27.728023 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-04-05 12:33:27.728030 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:33:27.728037 | orchestrator | 2025-04-05 12:33:27.728042 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-04-05 12:33:27.728047 | orchestrator | Saturday 05 April 2025 12:31:35 +0000 (0:00:00.459) 0:09:42.650 ******** 2025-04-05 12:33:27.728052 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:33:27.728058 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:33:27.728067 | orchestrator | 2025-04-05 12:33:27.728072 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-04-05 12:33:27.728077 | orchestrator | Saturday 05 April 2025 12:31:41 +0000 (0:00:05.594) 0:09:48.244 ******** 2025-04-05 12:33:27.728082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:33:27.728086 | orchestrator | 2025-04-05 12:33:27.728091 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-04-05 12:33:27.728096 | orchestrator | Saturday 05 April 2025 12:31:44 +0000 (0:00:02.961) 0:09:51.205 ******** 2025-04-05 12:33:27.728101 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.728106 | orchestrator | 2025-04-05 12:33:27.728111 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-04-05 12:33:27.728116 | orchestrator | Saturday 05 April 2025 12:31:45 +0000 (0:00:00.544) 0:09:51.749 ******** 2025-04-05 12:33:27.728121 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-05 12:33:27.728125 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-05 12:33:27.728130 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-05 12:33:27.728135 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-04-05 12:33:27.728140 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-04-05 12:33:27.728145 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-04-05 12:33:27.728149 | orchestrator | 2025-04-05 12:33:27.728154 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-04-05 12:33:27.728159 | orchestrator | Saturday 05 April 2025 12:31:46 +0000 (0:00:01.014) 0:09:52.764 ******** 2025-04-05 12:33:27.728164 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:33:27.728169 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.728174 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-05 12:33:27.728179 | orchestrator | 2025-04-05 12:33:27.728184 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-04-05 12:33:27.728189 | orchestrator | Saturday 05 April 2025 12:31:48 +0000 (0:00:02.019) 0:09:54.784 ******** 
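The "create filesystem pools" and "create ceph filesystem" tasks above run once, delegated to the first monitor, and create the cephfs_data and cephfs_metadata pools (pg_num 16 in this run) plus the CephFS filesystem on top of them. Expressed as plain ceph CLI calls wrapped in command tasks, the effect is roughly the following; this is an illustration only (ceph-ansible uses its own modules, and settings such as pool size 3 are applied in steps omitted here), and the "mons" group name is an assumption.

    - name: create filesystem pools (illustrative)
      ansible.builtin.command:
        cmd: "ceph osd pool create {{ item }} 16 16 replicated replicated_rule"
      loop:
        - cephfs_data
        - cephfs_metadata
      delegate_to: "{{ groups['mons'][0] }}"   # assumed inventory group name

    - name: create ceph filesystem (illustrative)
      ansible.builtin.command:
        cmd: "ceph fs new cephfs cephfs_metadata cephfs_data"
      delegate_to: "{{ groups['mons'][0] }}"
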
2025-04-05 12:33:27.728194 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:33:27.728201 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.728206 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728211 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:33:27.728216 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.728221 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728226 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:33:27.728231 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.728236 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728240 | orchestrator | 2025-04-05 12:33:27.728245 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-04-05 12:33:27.728250 | orchestrator | Saturday 05 April 2025 12:31:49 +0000 (0:00:01.075) 0:09:55.859 ******** 2025-04-05 12:33:27.728255 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728260 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728265 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728270 | orchestrator | 2025-04-05 12:33:27.728275 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-04-05 12:33:27.728280 | orchestrator | Saturday 05 April 2025 12:31:49 +0000 (0:00:00.320) 0:09:56.179 ******** 2025-04-05 12:33:27.728285 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.728290 | orchestrator | 2025-04-05 12:33:27.728295 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-04-05 12:33:27.728303 | orchestrator | Saturday 05 April 2025 12:31:50 +0000 (0:00:00.739) 0:09:56.919 ******** 2025-04-05 12:33:27.728308 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.728313 | orchestrator | 2025-04-05 12:33:27.728318 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-04-05 12:33:27.728323 | orchestrator | Saturday 05 April 2025 12:31:50 +0000 (0:00:00.547) 0:09:57.466 ******** 2025-04-05 12:33:27.728329 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728334 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728339 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728344 | orchestrator | 2025-04-05 12:33:27.728349 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-04-05 12:33:27.728354 | orchestrator | Saturday 05 April 2025 12:31:51 +0000 (0:00:01.198) 0:09:58.665 ******** 2025-04-05 12:33:27.728359 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728364 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728369 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728374 | orchestrator | 2025-04-05 12:33:27.728378 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-04-05 12:33:27.728383 | orchestrator | Saturday 05 April 2025 12:31:53 +0000 (0:00:01.078) 0:09:59.743 ******** 2025-04-05 12:33:27.728388 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728393 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728398 | 
orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728403 | orchestrator | 2025-04-05 12:33:27.728408 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-04-05 12:33:27.728413 | orchestrator | Saturday 05 April 2025 12:31:54 +0000 (0:00:01.593) 0:10:01.337 ******** 2025-04-05 12:33:27.728418 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728423 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728428 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728432 | orchestrator | 2025-04-05 12:33:27.728443 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-04-05 12:33:27.728448 | orchestrator | Saturday 05 April 2025 12:31:56 +0000 (0:00:01.889) 0:10:03.226 ******** 2025-04-05 12:33:27.728453 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-04-05 12:33:27.728458 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-04-05 12:33:27.728463 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-04-05 12:33:27.728468 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728473 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728477 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728482 | orchestrator | 2025-04-05 12:33:27.728487 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.728492 | orchestrator | Saturday 05 April 2025 12:32:13 +0000 (0:00:16.896) 0:10:20.123 ******** 2025-04-05 12:33:27.728497 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728502 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728507 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728511 | orchestrator | 2025-04-05 12:33:27.728516 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-05 12:33:27.728521 | orchestrator | Saturday 05 April 2025 12:32:14 +0000 (0:00:00.874) 0:10:20.997 ******** 2025-04-05 12:33:27.728526 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.728531 | orchestrator | 2025-04-05 12:33:27.728536 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-04-05 12:33:27.728541 | orchestrator | Saturday 05 April 2025 12:32:14 +0000 (0:00:00.542) 0:10:21.540 ******** 2025-04-05 12:33:27.728545 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728550 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728555 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728563 | orchestrator | 2025-04-05 12:33:27.728568 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-05 12:33:27.728573 | orchestrator | Saturday 05 April 2025 12:32:15 +0000 (0:00:00.326) 0:10:21.867 ******** 2025-04-05 12:33:27.728577 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728582 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728587 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728592 | orchestrator | 2025-04-05 12:33:27.728597 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-04-05 12:33:27.728602 | orchestrator | Saturday 05 April 2025 
12:32:16 +0000 (0:00:01.173) 0:10:23.040 ******** 2025-04-05 12:33:27.728607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.728612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.728616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.728621 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728626 | orchestrator | 2025-04-05 12:33:27.728631 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-05 12:33:27.728636 | orchestrator | Saturday 05 April 2025 12:32:16 +0000 (0:00:00.561) 0:10:23.602 ******** 2025-04-05 12:33:27.728641 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728646 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728651 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728656 | orchestrator | 2025-04-05 12:33:27.728661 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.728666 | orchestrator | Saturday 05 April 2025 12:32:17 +0000 (0:00:00.275) 0:10:23.878 ******** 2025-04-05 12:33:27.728671 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.728675 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.728680 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.728685 | orchestrator | 2025-04-05 12:33:27.728690 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-05 12:33:27.728695 | orchestrator | 2025-04-05 12:33:27.728700 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-05 12:33:27.728705 | orchestrator | Saturday 05 April 2025 12:32:18 +0000 (0:00:01.751) 0:10:25.629 ******** 2025-04-05 12:33:27.728710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.728714 | orchestrator | 2025-04-05 12:33:27.728719 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-05 12:33:27.728724 | orchestrator | Saturday 05 April 2025 12:32:19 +0000 (0:00:00.602) 0:10:26.232 ******** 2025-04-05 12:33:27.728729 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728736 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728741 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728768 | orchestrator | 2025-04-05 12:33:27.728773 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-05 12:33:27.728778 | orchestrator | Saturday 05 April 2025 12:32:19 +0000 (0:00:00.268) 0:10:26.500 ******** 2025-04-05 12:33:27.728783 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728788 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728793 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728798 | orchestrator | 2025-04-05 12:33:27.728803 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-05 12:33:27.728808 | orchestrator | Saturday 05 April 2025 12:32:20 +0000 (0:00:00.622) 0:10:27.122 ******** 2025-04-05 12:33:27.728813 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728818 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728823 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728831 | orchestrator | 2025-04-05 12:33:27.728836 | 
orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-05 12:33:27.728841 | orchestrator | Saturday 05 April 2025 12:32:21 +0000 (0:00:00.749) 0:10:27.872 ******** 2025-04-05 12:33:27.728849 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.728854 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.728859 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.728864 | orchestrator | 2025-04-05 12:33:27.728869 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-05 12:33:27.728877 | orchestrator | Saturday 05 April 2025 12:32:21 +0000 (0:00:00.604) 0:10:28.476 ******** 2025-04-05 12:33:27.728882 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728887 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728892 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728897 | orchestrator | 2025-04-05 12:33:27.728901 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-05 12:33:27.728906 | orchestrator | Saturday 05 April 2025 12:32:22 +0000 (0:00:00.278) 0:10:28.754 ******** 2025-04-05 12:33:27.728911 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728916 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728921 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728926 | orchestrator | 2025-04-05 12:33:27.728931 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-05 12:33:27.728936 | orchestrator | Saturday 05 April 2025 12:32:22 +0000 (0:00:00.417) 0:10:29.171 ******** 2025-04-05 12:33:27.728941 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728946 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728951 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728955 | orchestrator | 2025-04-05 12:33:27.728960 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-05 12:33:27.728965 | orchestrator | Saturday 05 April 2025 12:32:22 +0000 (0:00:00.289) 0:10:29.461 ******** 2025-04-05 12:33:27.728970 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.728975 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.728980 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.728984 | orchestrator | 2025-04-05 12:33:27.728989 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-05 12:33:27.728994 | orchestrator | Saturday 05 April 2025 12:32:23 +0000 (0:00:00.278) 0:10:29.740 ******** 2025-04-05 12:33:27.728999 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729004 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729009 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729013 | orchestrator | 2025-04-05 12:33:27.729018 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-05 12:33:27.729023 | orchestrator | Saturday 05 April 2025 12:32:23 +0000 (0:00:00.285) 0:10:30.025 ******** 2025-04-05 12:33:27.729028 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729033 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729038 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729043 | orchestrator | 2025-04-05 12:33:27.729047 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] 
************************* 2025-04-05 12:33:27.729052 | orchestrator | Saturday 05 April 2025 12:32:23 +0000 (0:00:00.422) 0:10:30.447 ******** 2025-04-05 12:33:27.729057 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.729062 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.729067 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.729072 | orchestrator | 2025-04-05 12:33:27.729077 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-05 12:33:27.729082 | orchestrator | Saturday 05 April 2025 12:32:24 +0000 (0:00:00.708) 0:10:31.155 ******** 2025-04-05 12:33:27.729087 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729092 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729097 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729102 | orchestrator | 2025-04-05 12:33:27.729107 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-05 12:33:27.729112 | orchestrator | Saturday 05 April 2025 12:32:24 +0000 (0:00:00.307) 0:10:31.463 ******** 2025-04-05 12:33:27.729117 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729124 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729129 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729134 | orchestrator | 2025-04-05 12:33:27.729139 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-05 12:33:27.729144 | orchestrator | Saturday 05 April 2025 12:32:25 +0000 (0:00:00.287) 0:10:31.750 ******** 2025-04-05 12:33:27.729149 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.729154 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.729159 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.729164 | orchestrator | 2025-04-05 12:33:27.729169 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-05 12:33:27.729174 | orchestrator | Saturday 05 April 2025 12:32:25 +0000 (0:00:00.451) 0:10:32.202 ******** 2025-04-05 12:33:27.729178 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.729183 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.729188 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.729193 | orchestrator | 2025-04-05 12:33:27.729198 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-05 12:33:27.729203 | orchestrator | Saturday 05 April 2025 12:32:25 +0000 (0:00:00.304) 0:10:32.506 ******** 2025-04-05 12:33:27.729208 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.729213 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.729217 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.729222 | orchestrator | 2025-04-05 12:33:27.729229 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-05 12:33:27.729234 | orchestrator | Saturday 05 April 2025 12:32:26 +0000 (0:00:00.293) 0:10:32.799 ******** 2025-04-05 12:33:27.729239 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729244 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729249 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729254 | orchestrator | 2025-04-05 12:33:27.729259 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-05 12:33:27.729264 | orchestrator | Saturday 05 April 2025 12:32:26 +0000 (0:00:00.287) 0:10:33.087 
******** 2025-04-05 12:33:27.729269 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729274 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729278 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729283 | orchestrator | 2025-04-05 12:33:27.729288 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-05 12:33:27.729293 | orchestrator | Saturday 05 April 2025 12:32:26 +0000 (0:00:00.490) 0:10:33.578 ******** 2025-04-05 12:33:27.729298 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729303 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729308 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729313 | orchestrator | 2025-04-05 12:33:27.729318 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-05 12:33:27.729322 | orchestrator | Saturday 05 April 2025 12:32:27 +0000 (0:00:00.279) 0:10:33.858 ******** 2025-04-05 12:33:27.729327 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.729332 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.729340 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.729345 | orchestrator | 2025-04-05 12:33:27.729350 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-05 12:33:27.729357 | orchestrator | Saturday 05 April 2025 12:32:27 +0000 (0:00:00.301) 0:10:34.159 ******** 2025-04-05 12:33:27.729362 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729367 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729371 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729376 | orchestrator | 2025-04-05 12:33:27.729381 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-05 12:33:27.729386 | orchestrator | Saturday 05 April 2025 12:32:27 +0000 (0:00:00.298) 0:10:34.457 ******** 2025-04-05 12:33:27.729391 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729396 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729401 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729408 | orchestrator | 2025-04-05 12:33:27.729413 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-05 12:33:27.729418 | orchestrator | Saturday 05 April 2025 12:32:28 +0000 (0:00:00.487) 0:10:34.945 ******** 2025-04-05 12:33:27.729423 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729428 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729432 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729437 | orchestrator | 2025-04-05 12:33:27.729442 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-05 12:33:27.729447 | orchestrator | Saturday 05 April 2025 12:32:28 +0000 (0:00:00.327) 0:10:35.273 ******** 2025-04-05 12:33:27.729452 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729456 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729461 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729466 | orchestrator | 2025-04-05 12:33:27.729471 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-05 12:33:27.729476 | orchestrator | Saturday 05 April 2025 12:32:28 +0000 (0:00:00.307) 0:10:35.580 ******** 2025-04-05 12:33:27.729481 | orchestrator | skipping: [testbed-node-3] 
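(Editorial note: the ceph-config tasks continuing below derive `num_osds` from a 'ceph-volume lvm batch --report' run; on these nodes every one of them is skipped. As an illustration only, such a check could be expressed as the sketch below. The device list, the JSON shape of the report, and the task wording are assumptions, not taken from the testbed's playbooks.)

```yaml
# Illustrative sketch of an OSD-count check via ceph-volume; devices and the
# assumption that the report is a JSON list ("new report") are placeholders.
- name: Run 'ceph-volume lvm batch --report' to see how many OSDs would be created
  ansible.builtin.command: ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
  register: lvm_batch_report
  changed_when: false

- name: Set num_osds from the report
  ansible.builtin.set_fact:
    num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"
```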
2025-04-05 12:33:27.729486 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729491 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729495 | orchestrator | 2025-04-05 12:33:27.729500 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-05 12:33:27.729505 | orchestrator | Saturday 05 April 2025 12:32:29 +0000 (0:00:00.336) 0:10:35.917 ******** 2025-04-05 12:33:27.729510 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729515 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729520 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729525 | orchestrator | 2025-04-05 12:33:27.729530 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-05 12:33:27.729535 | orchestrator | Saturday 05 April 2025 12:32:29 +0000 (0:00:00.533) 0:10:36.451 ******** 2025-04-05 12:33:27.729539 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729544 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729549 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729554 | orchestrator | 2025-04-05 12:33:27.729559 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-05 12:33:27.729564 | orchestrator | Saturday 05 April 2025 12:32:30 +0000 (0:00:00.327) 0:10:36.779 ******** 2025-04-05 12:33:27.729569 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729574 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729579 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729584 | orchestrator | 2025-04-05 12:33:27.729589 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-05 12:33:27.729593 | orchestrator | Saturday 05 April 2025 12:32:30 +0000 (0:00:00.365) 0:10:37.145 ******** 2025-04-05 12:33:27.729598 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729603 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729608 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729613 | orchestrator | 2025-04-05 12:33:27.729618 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-05 12:33:27.729623 | orchestrator | Saturday 05 April 2025 12:32:30 +0000 (0:00:00.335) 0:10:37.481 ******** 2025-04-05 12:33:27.729628 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729633 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729638 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729643 | orchestrator | 2025-04-05 12:33:27.729647 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-05 12:33:27.729652 | orchestrator | Saturday 05 April 2025 12:32:31 +0000 (0:00:00.522) 0:10:38.003 ******** 2025-04-05 12:33:27.729659 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729664 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729672 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729677 | orchestrator | 2025-04-05 12:33:27.729682 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-05 12:33:27.729687 | orchestrator | Saturday 05 April 2025 12:32:31 +0000 (0:00:00.340) 0:10:38.344 ******** 2025-04-05 12:33:27.729692 | 
orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729696 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729701 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729706 | orchestrator | 2025-04-05 12:33:27.729711 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-05 12:33:27.729716 | orchestrator | Saturday 05 April 2025 12:32:31 +0000 (0:00:00.337) 0:10:38.682 ******** 2025-04-05 12:33:27.729721 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.729726 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-05 12:33:27.729731 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729736 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.729741 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-05 12:33:27.729758 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729763 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.729768 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-05 12:33:27.729773 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729778 | orchestrator | 2025-04-05 12:33:27.729783 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-05 12:33:27.729787 | orchestrator | Saturday 05 April 2025 12:32:32 +0000 (0:00:00.369) 0:10:39.051 ******** 2025-04-05 12:33:27.729792 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-05 12:33:27.729797 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-05 12:33:27.729802 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729807 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-05 12:33:27.729812 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-05 12:33:27.729817 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729822 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-05 12:33:27.729826 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-05 12:33:27.729831 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729836 | orchestrator | 2025-04-05 12:33:27.729841 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-05 12:33:27.729846 | orchestrator | Saturday 05 April 2025 12:32:32 +0000 (0:00:00.353) 0:10:39.404 ******** 2025-04-05 12:33:27.729851 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729856 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729861 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729865 | orchestrator | 2025-04-05 12:33:27.729870 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-05 12:33:27.729875 | orchestrator | Saturday 05 April 2025 12:32:33 +0000 (0:00:00.600) 0:10:40.005 ******** 2025-04-05 12:33:27.729880 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729885 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729890 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729894 | orchestrator | 2025-04-05 12:33:27.729899 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:33:27.729904 | orchestrator | 
Saturday 05 April 2025 12:32:33 +0000 (0:00:00.330) 0:10:40.335 ******** 2025-04-05 12:33:27.729909 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729914 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729921 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729926 | orchestrator | 2025-04-05 12:33:27.729931 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:33:27.729936 | orchestrator | Saturday 05 April 2025 12:32:33 +0000 (0:00:00.323) 0:10:40.658 ******** 2025-04-05 12:33:27.729944 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729949 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729953 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729958 | orchestrator | 2025-04-05 12:33:27.729963 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:33:27.729968 | orchestrator | Saturday 05 April 2025 12:32:34 +0000 (0:00:00.313) 0:10:40.972 ******** 2025-04-05 12:33:27.729973 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.729978 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.729983 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.729988 | orchestrator | 2025-04-05 12:33:27.729995 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:33:27.730000 | orchestrator | Saturday 05 April 2025 12:32:34 +0000 (0:00:00.618) 0:10:41.590 ******** 2025-04-05 12:33:27.730005 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730010 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730034 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730040 | orchestrator | 2025-04-05 12:33:27.730045 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:33:27.730050 | orchestrator | Saturday 05 April 2025 12:32:35 +0000 (0:00:00.339) 0:10:41.930 ******** 2025-04-05 12:33:27.730054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.730059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.730064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.730069 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730074 | orchestrator | 2025-04-05 12:33:27.730079 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:33:27.730084 | orchestrator | Saturday 05 April 2025 12:32:35 +0000 (0:00:00.439) 0:10:42.369 ******** 2025-04-05 12:33:27.730089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.730094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.730101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.730106 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730111 | orchestrator | 2025-04-05 12:33:27.730116 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:33:27.730121 | orchestrator | Saturday 05 April 2025 12:32:36 +0000 (0:00:00.401) 0:10:42.771 ******** 2025-04-05 12:33:27.730126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.730131 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-04-05 12:33:27.730136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.730141 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730145 | orchestrator | 2025-04-05 12:33:27.730150 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.730155 | orchestrator | Saturday 05 April 2025 12:32:36 +0000 (0:00:00.431) 0:10:43.203 ******** 2025-04-05 12:33:27.730160 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730165 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730170 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730175 | orchestrator | 2025-04-05 12:33:27.730180 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:33:27.730185 | orchestrator | Saturday 05 April 2025 12:32:37 +0000 (0:00:00.693) 0:10:43.896 ******** 2025-04-05 12:33:27.730190 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.730195 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730200 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.730204 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730209 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.730214 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730219 | orchestrator | 2025-04-05 12:33:27.730227 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:33:27.730232 | orchestrator | Saturday 05 April 2025 12:32:37 +0000 (0:00:00.483) 0:10:44.380 ******** 2025-04-05 12:33:27.730237 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730242 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730247 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730251 | orchestrator | 2025-04-05 12:33:27.730256 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:33:27.730261 | orchestrator | Saturday 05 April 2025 12:32:38 +0000 (0:00:00.327) 0:10:44.708 ******** 2025-04-05 12:33:27.730266 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730271 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730276 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730280 | orchestrator | 2025-04-05 12:33:27.730285 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:33:27.730290 | orchestrator | Saturday 05 April 2025 12:32:38 +0000 (0:00:00.341) 0:10:45.050 ******** 2025-04-05 12:33:27.730295 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:33:27.730300 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730304 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:33:27.730309 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730314 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:33:27.730319 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730324 | orchestrator | 2025-04-05 12:33:27.730328 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:33:27.730333 | orchestrator | Saturday 05 April 2025 12:32:39 +0000 (0:00:00.996) 0:10:46.046 ******** 2025-04-05 12:33:27.730338 | orchestrator | skipping: [testbed-node-3] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.730343 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730348 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.730353 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730358 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:33:27.730363 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730368 | orchestrator | 2025-04-05 12:33:27.730373 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:33:27.730378 | orchestrator | Saturday 05 April 2025 12:32:39 +0000 (0:00:00.369) 0:10:46.415 ******** 2025-04-05 12:33:27.730382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.730387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.730392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.730397 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:33:27.730407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:33:27.730412 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:33:27.730417 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730422 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:33:27.730426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:33:27.730431 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:33:27.730436 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730441 | orchestrator | 2025-04-05 12:33:27.730446 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-05 12:33:27.730451 | orchestrator | Saturday 05 April 2025 12:32:40 +0000 (0:00:00.676) 0:10:47.092 ******** 2025-04-05 12:33:27.730456 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730465 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730470 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730475 | orchestrator | 2025-04-05 12:33:27.730480 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-05 12:33:27.730486 | orchestrator | Saturday 05 April 2025 12:32:41 +0000 (0:00:00.788) 0:10:47.880 ******** 2025-04-05 12:33:27.730491 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.730496 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730501 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.730506 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730511 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.730516 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730521 | orchestrator | 2025-04-05 12:33:27.730525 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-05 12:33:27.730530 | orchestrator | Saturday 05 April 2025 12:32:41 +0000 
(0:00:00.571) 0:10:48.451 ******** 2025-04-05 12:33:27.730535 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730540 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730545 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730550 | orchestrator | 2025-04-05 12:33:27.730555 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-05 12:33:27.730560 | orchestrator | Saturday 05 April 2025 12:32:42 +0000 (0:00:00.786) 0:10:49.238 ******** 2025-04-05 12:33:27.730565 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730570 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730574 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730579 | orchestrator | 2025-04-05 12:33:27.730584 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-04-05 12:33:27.730589 | orchestrator | Saturday 05 April 2025 12:32:43 +0000 (0:00:00.550) 0:10:49.789 ******** 2025-04-05 12:33:27.730594 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.730599 | orchestrator | 2025-04-05 12:33:27.730606 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-04-05 12:33:27.730611 | orchestrator | Saturday 05 April 2025 12:32:43 +0000 (0:00:00.752) 0:10:50.541 ******** 2025-04-05 12:33:27.730616 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-04-05 12:33:27.730621 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-04-05 12:33:27.730626 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-04-05 12:33:27.730631 | orchestrator | 2025-04-05 12:33:27.730636 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-04-05 12:33:27.730641 | orchestrator | Saturday 05 April 2025 12:32:44 +0000 (0:00:00.706) 0:10:51.247 ******** 2025-04-05 12:33:27.730646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:33:27.730651 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.730656 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-05 12:33:27.730660 | orchestrator | 2025-04-05 12:33:27.730665 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-04-05 12:33:27.730670 | orchestrator | Saturday 05 April 2025 12:32:46 +0000 (0:00:01.669) 0:10:52.917 ******** 2025-04-05 12:33:27.730675 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:33:27.730680 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-05 12:33:27.730685 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:33:27.730690 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-05 12:33:27.730694 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.730699 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.730704 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:33:27.730709 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-05 12:33:27.730717 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.730722 | orchestrator | 2025-04-05 12:33:27.730726 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-04-05 12:33:27.730731 | orchestrator | 
Saturday 05 April 2025 12:32:47 +0000 (0:00:01.227) 0:10:54.144 ******** 2025-04-05 12:33:27.730736 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730741 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730761 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730766 | orchestrator | 2025-04-05 12:33:27.730771 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-04-05 12:33:27.730776 | orchestrator | Saturday 05 April 2025 12:32:47 +0000 (0:00:00.332) 0:10:54.477 ******** 2025-04-05 12:33:27.730781 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730786 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.730791 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.730796 | orchestrator | 2025-04-05 12:33:27.730800 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-04-05 12:33:27.730805 | orchestrator | Saturday 05 April 2025 12:32:48 +0000 (0:00:00.332) 0:10:54.810 ******** 2025-04-05 12:33:27.730810 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-04-05 12:33:27.730815 | orchestrator | 2025-04-05 12:33:27.730820 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-04-05 12:33:27.730825 | orchestrator | Saturday 05 April 2025 12:32:48 +0000 (0:00:00.243) 0:10:55.054 ******** 2025-04-05 12:33:27.730830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730857 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730862 | orchestrator | 2025-04-05 12:33:27.730867 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-04-05 12:33:27.730872 | orchestrator | Saturday 05 April 2025 12:32:49 +0000 (0:00:01.072) 0:10:56.127 ******** 2025-04-05 12:33:27.730877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
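(Editorial note: the final skip result for the "set crush rule" task follows below. The key/value pairs echoed by the ec-profile, crush-rule and pool-creation tasks map onto ceph-ansible's `rgw_create_pools` dictionary; the erasure-coded and crush-rule branches are presumably skipped because every pool is of type 'replicated'. Reconstructed from the logged values, the variable would look roughly like this:)

```yaml
# Sketch reconstructed from the logged pool items; not the testbed's actual
# group_vars. All pools are replicated with pg_num 8 and size 3.
rgw_create_pools:
  default.rgw.buckets.data:
    pg_num: 8
    size: 3
    type: replicated
  default.rgw.buckets.index:
    pg_num: 8
    size: 3
    type: replicated
  default.rgw.control:
    pg_num: 8
    size: 3
    type: replicated
  default.rgw.log:
    pg_num: 8
    size: 3
    type: replicated
  default.rgw.meta:
    pg_num: 8
    size: 3
    type: replicated
```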
2025-04-05 12:33:27.730904 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730909 | orchestrator | 2025-04-05 12:33:27.730914 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-04-05 12:33:27.730919 | orchestrator | Saturday 05 April 2025 12:32:50 +0000 (0:00:00.580) 0:10:56.708 ******** 2025-04-05 12:33:27.730924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-05 12:33:27.730951 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.730956 | orchestrator | 2025-04-05 12:33:27.730961 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-04-05 12:33:27.730966 | orchestrator | Saturday 05 April 2025 12:32:50 +0000 (0:00:00.579) 0:10:57.287 ******** 2025-04-05 12:33:27.730971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-05 12:33:27.730976 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-05 12:33:27.730981 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-05 12:33:27.730986 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-05 12:33:27.730991 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-05 12:33:27.730996 | orchestrator | 2025-04-05 12:33:27.731001 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-04-05 12:33:27.731006 | orchestrator | Saturday 05 April 2025 12:33:10 +0000 (0:00:20.219) 0:11:17.506 ******** 2025-04-05 12:33:27.731011 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.731015 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.731020 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.731025 | orchestrator | 2025-04-05 12:33:27.731030 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-04-05 12:33:27.731035 | orchestrator | Saturday 05 April 2025 12:33:11 +0000 (0:00:00.385) 0:11:17.892 ******** 2025-04-05 12:33:27.731040 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.731045 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.731050 | orchestrator | 
skipping: [testbed-node-5] 2025-04-05 12:33:27.731055 | orchestrator | 2025-04-05 12:33:27.731060 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-04-05 12:33:27.731065 | orchestrator | Saturday 05 April 2025 12:33:11 +0000 (0:00:00.281) 0:11:18.173 ******** 2025-04-05 12:33:27.731070 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.731075 | orchestrator | 2025-04-05 12:33:27.731080 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-04-05 12:33:27.731085 | orchestrator | Saturday 05 April 2025 12:33:11 +0000 (0:00:00.480) 0:11:18.653 ******** 2025-04-05 12:33:27.731090 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.731094 | orchestrator | 2025-04-05 12:33:27.731099 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-04-05 12:33:27.731108 | orchestrator | Saturday 05 April 2025 12:33:12 +0000 (0:00:00.635) 0:11:19.288 ******** 2025-04-05 12:33:27.731113 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.731118 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731123 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731128 | orchestrator | 2025-04-05 12:33:27.731135 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-04-05 12:33:27.731140 | orchestrator | Saturday 05 April 2025 12:33:13 +0000 (0:00:00.973) 0:11:20.261 ******** 2025-04-05 12:33:27.731145 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.731149 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731154 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731159 | orchestrator | 2025-04-05 12:33:27.731164 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-04-05 12:33:27.731169 | orchestrator | Saturday 05 April 2025 12:33:14 +0000 (0:00:00.945) 0:11:21.207 ******** 2025-04-05 12:33:27.731174 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.731178 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731183 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731188 | orchestrator | 2025-04-05 12:33:27.731193 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-04-05 12:33:27.731198 | orchestrator | Saturday 05 April 2025 12:33:16 +0000 (0:00:01.788) 0:11:22.995 ******** 2025-04-05 12:33:27.731203 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.731208 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.731213 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-05 12:33:27.731218 | orchestrator | 2025-04-05 12:33:27.731223 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-04-05 12:33:27.731227 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:01.920) 0:11:24.916 ******** 2025-04-05 12:33:27.731232 | orchestrator | skipping: [testbed-node-3] 
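(Editorial note: the remaining skip results for the multisite include follow below. The items echoed by the "systemd start rgw container" task show one rados gateway instance per node; as a per-host sketch reconstructed from those logged values, the instance definition looks roughly like this:)

```yaml
# Illustrative per-host instance list; addresses differ per node
# (192.168.16.13 on testbed-node-3, .14 and .15 on nodes 4 and 5).
rgw_instances:
  - instance_name: rgw0
    radosgw_address: 192.168.16.13
    radosgw_frontend_port: 8081
```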
2025-04-05 12:33:27.731237 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:33:27.731242 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:33:27.731247 | orchestrator | 2025-04-05 12:33:27.731252 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-05 12:33:27.731257 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:01.458) 0:11:26.374 ******** 2025-04-05 12:33:27.731262 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.731267 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731271 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731276 | orchestrator | 2025-04-05 12:33:27.731281 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-05 12:33:27.731286 | orchestrator | Saturday 05 April 2025 12:33:20 +0000 (0:00:00.755) 0:11:27.130 ******** 2025-04-05 12:33:27.731291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:33:27.731296 | orchestrator | 2025-04-05 12:33:27.731301 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-05 12:33:27.731306 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.641) 0:11:27.771 ******** 2025-04-05 12:33:27.731311 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.731315 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.731320 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.731325 | orchestrator | 2025-04-05 12:33:27.731330 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-05 12:33:27.731335 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.306) 0:11:28.077 ******** 2025-04-05 12:33:27.731340 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:33:27.731345 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731350 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731354 | orchestrator | 2025-04-05 12:33:27.731359 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-05 12:33:27.731364 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:01.071) 0:11:29.148 ******** 2025-04-05 12:33:27.731369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:33:27.731377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:33:27.731382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:33:27.731387 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:33:27.731394 | orchestrator | 2025-04-05 12:33:27.731399 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-05 12:33:27.731404 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.978) 0:11:30.127 ******** 2025-04-05 12:33:27.731409 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:33:27.731414 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:33:27.731419 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:33:27.731423 | orchestrator | 2025-04-05 12:33:27.731428 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-05 12:33:27.731433 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.290) 0:11:30.418 ******** 2025-04-05 12:33:27.731438 | orchestrator | changed: [testbed-node-3] 2025-04-05 
12:33:27.731443 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:33:27.731448 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:33:27.731453 | orchestrator | 2025-04-05 12:33:27.731458 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:33:27.731463 | orchestrator | testbed-node-0 : ok=120  changed=33  unreachable=0 failed=0 skipped=274  rescued=0 ignored=0 2025-04-05 12:33:27.731470 | orchestrator | testbed-node-1 : ok=116  changed=32  unreachable=0 failed=0 skipped=263  rescued=0 ignored=0 2025-04-05 12:33:27.731477 | orchestrator | testbed-node-2 : ok=123  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-04-05 12:33:30.719921 | orchestrator | testbed-node-3 : ok=184  changed=50  unreachable=0 failed=0 skipped=366  rescued=0 ignored=0 2025-04-05 12:33:30.720033 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=310  rescued=0 ignored=0 2025-04-05 12:33:30.720051 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=308  rescued=0 ignored=0 2025-04-05 12:33:30.720066 | orchestrator | 2025-04-05 12:33:30.720081 | orchestrator | 2025-04-05 12:33:30.720094 | orchestrator | 2025-04-05 12:33:30.720109 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:33:30.720125 | orchestrator | Saturday 05 April 2025 12:33:24 +0000 (0:00:00.957) 0:11:31.375 ******** 2025-04-05 12:33:30.720139 | orchestrator | =============================================================================== 2025-04-05 12:33:30.720153 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:quincy image -- 37.28s 2025-04-05 12:33:30.720167 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 28.01s 2025-04-05 12:33:30.720181 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.53s 2025-04-05 12:33:30.720195 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 20.22s 2025-04-05 12:33:30.720209 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 16.90s 2025-04-05 12:33:30.720311 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.30s 2025-04-05 12:33:30.720332 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.55s 2025-04-05 12:33:30.720346 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.33s 2025-04-05 12:33:30.720360 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.78s 2025-04-05 12:33:30.720374 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.10s 2025-04-05 12:33:30.720388 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 5.65s 2025-04-05 12:33:30.720510 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 5.59s 2025-04-05 12:33:30.720530 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.82s 2025-04-05 12:33:30.720545 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.37s 2025-04-05 12:33:30.720560 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.24s 2025-04-05 12:33:30.720575 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.17s 2025-04-05 12:33:30.720590 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.12s 2025-04-05 12:33:30.720605 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.54s 2025-04-05 12:33:30.720620 | orchestrator | ceph-mds : create ceph filesystem --------------------------------------- 2.96s 2025-04-05 12:33:30.720635 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 2.88s 2025-04-05 12:33:30.720650 | orchestrator | 2025-04-05 12:33:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:30.720684 | orchestrator | 2025-04-05 12:33:30 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:33.748111 | orchestrator | 2025-04-05 12:33:30 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:33.748213 | orchestrator | 2025-04-05 12:33:30 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:33.748232 | orchestrator | 2025-04-05 12:33:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:33.748262 | orchestrator | 2025-04-05 12:33:33 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:33.749926 | orchestrator | 2025-04-05 12:33:33 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:33.753705 | orchestrator | 2025-04-05 12:33:33 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:36.785119 | orchestrator | 2025-04-05 12:33:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:36.785245 | orchestrator | 2025-04-05 12:33:36 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:36.787260 | orchestrator | 2025-04-05 12:33:36 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 
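(Editorial note: the repeating "is in state STARTED" / "Wait 1 second(s)" lines come from the deployment tooling's own wait loop, which polls the three task IDs once per second until they leave the STARTED state. Purely as an illustration of that pattern, an equivalent Ansible until-loop is sketched below; the status command is a hypothetical placeholder, not the actual tooling.)

```yaml
# Rough equivalent of the wait loop seen in the log. `/usr/local/bin/task-state`
# is a hypothetical helper that prints a task's state; retries is arbitrary.
- name: Wait for deployment tasks to leave the STARTED state
  ansible.builtin.command: /usr/local/bin/task-state {{ item }}
  register: task_state
  until: task_state.stdout != 'STARTED'
  retries: 3600
  delay: 1
  loop:
    - fe088ff9-7b38-4ca2-b5dc-22afd135da17
    - e6eda028-a200-46a2-92bd-10f0cde30bb3
    - b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb
```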
2025-04-05 12:33:36.789494 | orchestrator | 2025-04-05 12:33:36 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:36.789800 | orchestrator | 2025-04-05 12:33:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:39.827640 | orchestrator | 2025-04-05 12:33:39 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:39.831228 | orchestrator | 2025-04-05 12:33:39 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:39.831658 | orchestrator | 2025-04-05 12:33:39 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:42.855528 | orchestrator | 2025-04-05 12:33:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:42.855672 | orchestrator | 2025-04-05 12:33:42 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:42.855958 | orchestrator | 2025-04-05 12:33:42 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:42.856494 | orchestrator | 2025-04-05 12:33:42 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:42.856632 | orchestrator | 2025-04-05 12:33:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:45.894652 | orchestrator | 2025-04-05 12:33:45 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:45.896067 | orchestrator | 2025-04-05 12:33:45 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:45.897961 | orchestrator | 2025-04-05 12:33:45 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:48.932831 | orchestrator | 2025-04-05 12:33:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:48.932960 | orchestrator | 2025-04-05 12:33:48 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:48.934413 | orchestrator | 2025-04-05 12:33:48 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:48.935662 | orchestrator | 2025-04-05 12:33:48 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:51.983977 | orchestrator | 2025-04-05 12:33:48 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:51.984104 | orchestrator | 2025-04-05 12:33:51 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:51.985238 | orchestrator | 2025-04-05 12:33:51 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:51.987520 | orchestrator | 2025-04-05 12:33:51 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:55.045098 | orchestrator | 2025-04-05 12:33:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:55.045227 | orchestrator | 2025-04-05 12:33:55 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:55.045642 | orchestrator | 2025-04-05 12:33:55 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:55.048165 | orchestrator | 2025-04-05 12:33:55 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:33:58.091500 | orchestrator | 2025-04-05 12:33:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:33:58.091628 | orchestrator | 2025-04-05 12:33:58 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:33:58.092897 | orchestrator | 
2025-04-05 12:33:58 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:33:58.094347 | orchestrator | 2025-04-05 12:33:58 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:01.133549 | orchestrator | 2025-04-05 12:33:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:01.133668 | orchestrator | 2025-04-05 12:34:01 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:01.137109 | orchestrator | 2025-04-05 12:34:01 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:01.137544 | orchestrator | 2025-04-05 12:34:01 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:04.182901 | orchestrator | 2025-04-05 12:34:01 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:04.183030 | orchestrator | 2025-04-05 12:34:04 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:04.183860 | orchestrator | 2025-04-05 12:34:04 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:04.186140 | orchestrator | 2025-04-05 12:34:04 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:07.239994 | orchestrator | 2025-04-05 12:34:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:07.240116 | orchestrator | 2025-04-05 12:34:07 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:07.241611 | orchestrator | 2025-04-05 12:34:07 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:07.243404 | orchestrator | 2025-04-05 12:34:07 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:10.294205 | orchestrator | 2025-04-05 12:34:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:10.294329 | orchestrator | 2025-04-05 12:34:10 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:10.295378 | orchestrator | 2025-04-05 12:34:10 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:10.297305 | orchestrator | 2025-04-05 12:34:10 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:10.297335 | orchestrator | 2025-04-05 12:34:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:13.342255 | orchestrator | 2025-04-05 12:34:13 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:13.342910 | orchestrator | 2025-04-05 12:34:13 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:13.344639 | orchestrator | 2025-04-05 12:34:13 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:13.344672 | orchestrator | 2025-04-05 12:34:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:16.401017 | orchestrator | 2025-04-05 12:34:16 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:16.403211 | orchestrator | 2025-04-05 12:34:16 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:16.404977 | orchestrator | 2025-04-05 12:34:16 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:16.405515 | orchestrator | 2025-04-05 12:34:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:19.453421 | orchestrator | 2025-04-05 12:34:19 | INFO  | Task 
fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:19.454842 | orchestrator | 2025-04-05 12:34:19 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:19.456177 | orchestrator | 2025-04-05 12:34:19 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:19.456412 | orchestrator | 2025-04-05 12:34:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:22.506875 | orchestrator | 2025-04-05 12:34:22 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:22.509608 | orchestrator | 2025-04-05 12:34:22 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:22.511841 | orchestrator | 2025-04-05 12:34:22 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:25.563032 | orchestrator | 2025-04-05 12:34:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:25.563114 | orchestrator | 2025-04-05 12:34:25 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:25.567928 | orchestrator | 2025-04-05 12:34:25 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:25.569378 | orchestrator | 2025-04-05 12:34:25 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:28.622822 | orchestrator | 2025-04-05 12:34:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:28.622944 | orchestrator | 2025-04-05 12:34:28 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:28.624110 | orchestrator | 2025-04-05 12:34:28 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:28.626390 | orchestrator | 2025-04-05 12:34:28 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:28.626498 | orchestrator | 2025-04-05 12:34:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:31.672854 | orchestrator | 2025-04-05 12:34:31 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:31.673825 | orchestrator | 2025-04-05 12:34:31 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:31.676260 | orchestrator | 2025-04-05 12:34:31 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:34.725784 | orchestrator | 2025-04-05 12:34:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:34.725906 | orchestrator | 2025-04-05 12:34:34 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:34.726792 | orchestrator | 2025-04-05 12:34:34 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state STARTED 2025-04-05 12:34:34.728122 | orchestrator | 2025-04-05 12:34:34 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:34.728622 | orchestrator | 2025-04-05 12:34:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:37.780721 | orchestrator | 2025-04-05 12:34:37 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:37.784200 | orchestrator | 2025-04-05 12:34:37 | INFO  | Task e6eda028-a200-46a2-92bd-10f0cde30bb3 is in state SUCCESS 2025-04-05 12:34:37.785941 | orchestrator | 2025-04-05 12:34:37.785984 | orchestrator | 2025-04-05 12:34:37.785999 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:34:37.786014 | 
orchestrator | 2025-04-05 12:34:37.786074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:34:37.786293 | orchestrator | Saturday 05 April 2025 12:33:14 +0000 (0:00:00.274) 0:00:00.274 ******** 2025-04-05 12:34:37.786310 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.786326 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.786340 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.786354 | orchestrator | 2025-04-05 12:34:37.786369 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:34:37.786383 | orchestrator | Saturday 05 April 2025 12:33:14 +0000 (0:00:00.391) 0:00:00.666 ******** 2025-04-05 12:34:37.786397 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-04-05 12:34:37.786411 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-04-05 12:34:37.786425 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-04-05 12:34:37.786439 | orchestrator | 2025-04-05 12:34:37.786454 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-04-05 12:34:37.786468 | orchestrator | 2025-04-05 12:34:37.786482 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-05 12:34:37.786496 | orchestrator | Saturday 05 April 2025 12:33:15 +0000 (0:00:00.408) 0:00:01.074 ******** 2025-04-05 12:34:37.786510 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:34:37.786525 | orchestrator | 2025-04-05 12:34:37.786540 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-04-05 12:34:37.786554 | orchestrator | Saturday 05 April 2025 12:33:15 +0000 (0:00:00.508) 0:00:01.583 ******** 2025-04-05 12:34:37.786572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.786629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.786647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.786671 | orchestrator | 2025-04-05 12:34:37.786685 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-04-05 12:34:37.786700 | orchestrator | Saturday 05 April 2025 12:33:17 +0000 (0:00:01.424) 0:00:03.008 ******** 2025-04-05 12:34:37.786713 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.786728 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.786767 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.786782 | orchestrator | 2025-04-05 12:34:37.786796 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-05 12:34:37.786810 | orchestrator | Saturday 05 April 2025 12:33:17 +0000 (0:00:00.250) 0:00:03.258 ******** 2025-04-05 12:34:37.786833 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-05 12:34:37.786848 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-04-05 12:34:37.786862 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-04-05 12:34:37.786884 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-04-05 12:34:37.786900 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-04-05 12:34:37.786916 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-04-05 12:34:37.786931 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-04-05 12:34:37.786947 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-04-05 12:34:37.786963 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-05 12:34:37.786978 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-04-05 12:34:37.786994 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-04-05 12:34:37.787010 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'masakari', 'enabled': False})  2025-04-05 12:34:37.787026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-04-05 12:34:37.787048 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-04-05 12:34:37.787064 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-04-05 12:34:37.787080 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-04-05 12:34:37.787095 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-05 12:34:37.787111 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-04-05 12:34:37.787126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-04-05 12:34:37.787141 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-04-05 12:34:37.787157 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-04-05 12:34:37.787172 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-04-05 12:34:37.787187 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-04-05 12:34:37.787202 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-04-05 12:34:37.787218 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-04-05 12:34:37.787235 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-04-05 12:34:37.787252 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-04-05 12:34:37.787268 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-04-05 12:34:37.787281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-04-05 12:34:37.787295 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-04-05 12:34:37.787309 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-04-05 12:34:37.787323 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-04-05 12:34:37.787336 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-04-05 12:34:37.787351 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-04-05 12:34:37.787365 | orchestrator | 2025-04-05 12:34:37.787379 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-04-05 12:34:37.787393 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:00.860) 0:00:04.119 ******** 2025-04-05 12:34:37.787407 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.787421 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.787435 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.787449 | orchestrator | 2025-04-05 12:34:37.787463 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.787476 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:00.417) 0:00:04.536 ******** 2025-04-05 12:34:37.787491 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.787506 | orchestrator | 2025-04-05 12:34:37.787526 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.787548 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:00.100) 0:00:04.636 ******** 2025-04-05 12:34:37.787562 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.787584 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.787600 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.787615 | orchestrator | 2025-04-05 12:34:37.787629 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.787643 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:00.368) 0:00:05.005 ******** 2025-04-05 12:34:37.787657 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.787671 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.787685 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.787699 | orchestrator | 2025-04-05 12:34:37.787713 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.787727 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:00.260) 0:00:05.266 ******** 2025-04-05 12:34:37.787758 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.787773 | orchestrator | 2025-04-05 12:34:37.787787 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.787806 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:00.213) 0:00:05.479 ******** 2025-04-05 12:34:37.787821 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.787835 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.787849 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.787863 | orchestrator | 2025-04-05 12:34:37.787876 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.787891 | orchestrator | Saturday 05 April 2025 12:33:20 +0000 (0:00:00.353) 0:00:05.833 ******** 2025-04-05 12:34:37.787904 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.787918 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.787932 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.787946 | orchestrator | 2025-04-05 12:34:37.787960 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.787974 | orchestrator | Saturday 05 April 2025 12:33:20 +0000 (0:00:00.369) 0:00:06.202 ******** 2025-04-05 12:34:37.787988 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788002 | orchestrator | 2025-04-05 12:34:37.788016 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2025-04-05 12:34:37.788030 | orchestrator | Saturday 05 April 2025 12:33:20 +0000 (0:00:00.110) 0:00:06.313 ******** 2025-04-05 12:34:37.788043 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788057 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.788071 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.788085 | orchestrator | 2025-04-05 12:34:37.788099 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.788113 | orchestrator | Saturday 05 April 2025 12:33:20 +0000 (0:00:00.318) 0:00:06.631 ******** 2025-04-05 12:34:37.788127 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.788141 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.788155 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.788169 | orchestrator | 2025-04-05 12:34:37.788183 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.788197 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.299) 0:00:06.930 ******** 2025-04-05 12:34:37.788211 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788225 | orchestrator | 2025-04-05 12:34:37.788239 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.788253 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.113) 0:00:07.044 ******** 2025-04-05 12:34:37.788267 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788281 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.788296 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.788310 | orchestrator | 2025-04-05 12:34:37.788324 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.788344 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.335) 0:00:07.379 ******** 2025-04-05 12:34:37.788358 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.788373 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.788387 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.788401 | orchestrator | 2025-04-05 12:34:37.788415 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.788428 | orchestrator | Saturday 05 April 2025 12:33:21 +0000 (0:00:00.240) 0:00:07.620 ******** 2025-04-05 12:34:37.788442 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788456 | orchestrator | 2025-04-05 12:34:37.788470 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.788484 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:00.165) 0:00:07.786 ******** 2025-04-05 12:34:37.788497 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788511 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.788525 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.788539 | orchestrator | 2025-04-05 12:34:37.788552 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.788567 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:00.219) 0:00:08.006 ******** 2025-04-05 12:34:37.788580 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.788594 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.788608 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.788622 | 
orchestrator | 2025-04-05 12:34:37.788636 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.788649 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:00.427) 0:00:08.433 ******** 2025-04-05 12:34:37.788663 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788677 | orchestrator | 2025-04-05 12:34:37.788691 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.788705 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:00.110) 0:00:08.543 ******** 2025-04-05 12:34:37.788718 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.788733 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.788774 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.788788 | orchestrator | 2025-04-05 12:34:37.788803 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.788823 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.407) 0:00:08.950 ******** 2025-04-05 12:34:37.788838 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.788852 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.788866 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.788880 | orchestrator | 2025-04-05 12:34:37.788894 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.789014 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.381) 0:00:09.332 ******** 2025-04-05 12:34:37.789030 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789045 | orchestrator | 2025-04-05 12:34:37.789059 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.789073 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.101) 0:00:09.434 ******** 2025-04-05 12:34:37.789088 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789102 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.789116 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.789130 | orchestrator | 2025-04-05 12:34:37.789150 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.789165 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.319) 0:00:09.753 ******** 2025-04-05 12:34:37.789179 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.789193 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.789207 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.789221 | orchestrator | 2025-04-05 12:34:37.789235 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.789249 | orchestrator | Saturday 05 April 2025 12:33:24 +0000 (0:00:00.254) 0:00:10.008 ******** 2025-04-05 12:34:37.789273 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789288 | orchestrator | 2025-04-05 12:34:37.789302 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.789316 | orchestrator | Saturday 05 April 2025 12:33:24 +0000 (0:00:00.339) 0:00:10.347 ******** 2025-04-05 12:34:37.789330 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789344 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.789373 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.789387 | orchestrator | 
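The repeating "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" blocks around this point are the horizon role including policy_item.yml once per enabled dashboard service (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova, octavia); the items skipped at the include step (cloudkitty, heat, ironic, masakari, mistral, tacker, trove, watcher) are disabled, and the skipped override steps suggest no operator-supplied policy files were found in this run. The Python sketch below mirrors that control flow only for illustration; the to_bool() helper, the candidate file names, and the /etc/kolla/config path are assumptions, not the role's actual implementation.

import os

def to_bool(value) -> bool:
    # Ansible-style truthiness: accepts real booleans and 'yes'/'no' strings.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

def custom_policy_file(config_dir, service):
    # Look for an operator-supplied policy override for this dashboard plugin.
    for candidate in (f"{service}_policy.yaml", f"{service}_policy.json"):
        path = os.path.join(config_dir, "horizon", candidate)
        if os.path.exists(path):
            return path
    return None  # -> the "Update custom policy file name" step is skipped

services = [
    {"name": "cloudkitty", "enabled": False},  # skipped in the log above
    {"name": "heat", "enabled": "no"},         # skipped in the log above
    {"name": "cinder", "enabled": "yes"},
    {"name": "designate", "enabled": True},
    {"name": "nova", "enabled": True},
]

for item in services:
    if not to_bool(item["enabled"]):
        continue  # corresponds to the per-item "skipping:" lines in the log
    override = custom_policy_file("/etc/kolla/config", item["name"])
    print(item["name"], "->", override or "default upstream policy")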
2025-04-05 12:34:37.789401 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.789416 | orchestrator | Saturday 05 April 2025 12:33:24 +0000 (0:00:00.308) 0:00:10.655 ******** 2025-04-05 12:34:37.789429 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.789443 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.789457 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.789471 | orchestrator | 2025-04-05 12:34:37.789486 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.789499 | orchestrator | Saturday 05 April 2025 12:33:25 +0000 (0:00:00.344) 0:00:10.999 ******** 2025-04-05 12:34:37.789513 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789527 | orchestrator | 2025-04-05 12:34:37.789541 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.789555 | orchestrator | Saturday 05 April 2025 12:33:25 +0000 (0:00:00.106) 0:00:11.105 ******** 2025-04-05 12:34:37.789569 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789583 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.789597 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.789611 | orchestrator | 2025-04-05 12:34:37.789626 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-05 12:34:37.789642 | orchestrator | Saturday 05 April 2025 12:33:25 +0000 (0:00:00.302) 0:00:11.408 ******** 2025-04-05 12:34:37.789658 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:34:37.789674 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:34:37.789690 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:34:37.789705 | orchestrator | 2025-04-05 12:34:37.789722 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-05 12:34:37.789755 | orchestrator | Saturday 05 April 2025 12:33:25 +0000 (0:00:00.308) 0:00:11.716 ******** 2025-04-05 12:34:37.789772 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789789 | orchestrator | 2025-04-05 12:34:37.789804 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-05 12:34:37.789820 | orchestrator | Saturday 05 April 2025 12:33:26 +0000 (0:00:00.099) 0:00:11.816 ******** 2025-04-05 12:34:37.789836 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.789851 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.789867 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.789883 | orchestrator | 2025-04-05 12:34:37.789898 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-04-05 12:34:37.789913 | orchestrator | Saturday 05 April 2025 12:33:26 +0000 (0:00:00.288) 0:00:12.105 ******** 2025-04-05 12:34:37.789929 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:34:37.789944 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:34:37.789960 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:34:37.789976 | orchestrator | 2025-04-05 12:34:37.789991 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-04-05 12:34:37.790005 | orchestrator | Saturday 05 April 2025 12:33:28 +0000 (0:00:01.739) 0:00:13.845 ******** 2025-04-05 12:34:37.790049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-05 
12:34:37.790065 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-05 12:34:37.790079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-05 12:34:37.790101 | orchestrator | 2025-04-05 12:34:37.790115 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-04-05 12:34:37.790130 | orchestrator | Saturday 05 April 2025 12:33:30 +0000 (0:00:02.331) 0:00:16.176 ******** 2025-04-05 12:34:37.790143 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-05 12:34:37.790157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-05 12:34:37.790171 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-05 12:34:37.790185 | orchestrator | 2025-04-05 12:34:37.790207 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-04-05 12:34:37.790222 | orchestrator | Saturday 05 April 2025 12:33:32 +0000 (0:00:02.531) 0:00:18.708 ******** 2025-04-05 12:34:37.790236 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-05 12:34:37.790250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-05 12:34:37.790264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-05 12:34:37.790278 | orchestrator | 2025-04-05 12:34:37.790293 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-04-05 12:34:37.790307 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:01.588) 0:00:20.296 ******** 2025-04-05 12:34:37.790321 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.790335 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.790349 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.790363 | orchestrator | 2025-04-05 12:34:37.790377 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-04-05 12:34:37.790391 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.274) 0:00:20.571 ******** 2025-04-05 12:34:37.790405 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.790419 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.790433 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.790447 | orchestrator | 2025-04-05 12:34:37.790461 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-05 12:34:37.790475 | orchestrator | Saturday 05 April 2025 12:33:35 +0000 (0:00:00.255) 0:00:20.826 ******** 2025-04-05 12:34:37.790489 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:34:37.790503 | orchestrator | 2025-04-05 12:34:37.790523 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-05 12:34:37.790537 | orchestrator | Saturday 05 April 2025 12:33:35 +0000 (0:00:00.677) 0:00:21.503 ******** 2025-04-05 12:34:37.790552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.790586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.790603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.790625 | orchestrator | 2025-04-05 12:34:37.790640 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-04-05 12:34:37.790654 | orchestrator | Saturday 05 April 2025 12:33:37 +0000 (0:00:01.365) 0:00:22.869 ******** 2025-04-05 12:34:37.790677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790693 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.790715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790790 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.790807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790831 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.790845 | orchestrator | 2025-04-05 12:34:37.790859 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-04-05 12:34:37.790873 | orchestrator | Saturday 05 April 2025 12:33:37 +0000 (0:00:00.649) 0:00:23.518 ******** 2025-04-05 12:34:37.790897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790913 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.790927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790949 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.790979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-05 12:34:37.790997 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.791011 | orchestrator | 2025-04-05 12:34:37.791026 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-04-05 12:34:37.791039 | orchestrator | Saturday 05 April 2025 12:33:39 +0000 (0:00:01.470) 0:00:24.988 ******** 2025-04-05 12:34:37.791054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.791085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.791102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-05 12:34:37.791123 | orchestrator | 2025-04-05 12:34:37.791138 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-05 12:34:37.791152 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:04.446) 0:00:29.435 ******** 2025-04-05 12:34:37.791166 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:34:37.791180 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:34:37.791194 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:34:37.791208 | orchestrator | 2025-04-05 12:34:37.791222 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-05 12:34:37.791235 | orchestrator | Saturday 05 April 2025 12:33:44 +0000 (0:00:00.362) 0:00:29.798 ******** 2025-04-05 12:34:37.791249 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:34:37.791263 | 
orchestrator | 2025-04-05 12:34:37.791282 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-04-05 12:34:40.845180 | orchestrator | Saturday 05 April 2025 12:33:44 +0000 (0:00:00.485) 0:00:30.283 ******** 2025-04-05 12:34:40.845257 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:34:40.845278 | orchestrator | 2025-04-05 12:34:40.845284 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-04-05 12:34:40.845290 | orchestrator | Saturday 05 April 2025 12:33:46 +0000 (0:00:02.198) 0:00:32.482 ******** 2025-04-05 12:34:40.845296 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:34:40.845301 | orchestrator | 2025-04-05 12:34:40.845307 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-04-05 12:34:40.845312 | orchestrator | Saturday 05 April 2025 12:33:49 +0000 (0:00:02.306) 0:00:34.789 ******** 2025-04-05 12:34:40.845317 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:34:40.845322 | orchestrator | 2025-04-05 12:34:40.845328 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-05 12:34:40.845333 | orchestrator | Saturday 05 April 2025 12:33:59 +0000 (0:00:10.968) 0:00:45.758 ******** 2025-04-05 12:34:40.845338 | orchestrator | 2025-04-05 12:34:40.845343 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-05 12:34:40.845348 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.062) 0:00:45.820 ******** 2025-04-05 12:34:40.845353 | orchestrator | 2025-04-05 12:34:40.845359 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-05 12:34:40.845364 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.058) 0:00:45.878 ******** 2025-04-05 12:34:40.845382 | orchestrator | 2025-04-05 12:34:40.845388 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-04-05 12:34:40.845393 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.203) 0:00:46.081 ******** 2025-04-05 12:34:40.845398 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:34:40.845403 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:34:40.845409 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:34:40.845414 | orchestrator | 2025-04-05 12:34:40.845420 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:34:40.845426 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-04-05 12:34:40.845433 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-04-05 12:34:40.845438 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-04-05 12:34:40.845444 | orchestrator | 2025-04-05 12:34:40.845449 | orchestrator | 2025-04-05 12:34:40.845454 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:34:40.845459 | orchestrator | Saturday 05 April 2025 12:34:35 +0000 (0:00:34.690) 0:01:20.772 ******** 2025-04-05 12:34:40.845465 | orchestrator | =============================================================================== 2025-04-05 12:34:40.845470 | orchestrator | horizon : Restart horizon container ------------------------------------ 34.69s 2025-04-05 
12:34:40.845475 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 10.97s 2025-04-05 12:34:40.845483 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.45s 2025-04-05 12:34:40.845489 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.53s 2025-04-05 12:34:40.845494 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.33s 2025-04-05 12:34:40.845499 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.31s 2025-04-05 12:34:40.845505 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.20s 2025-04-05 12:34:40.845510 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.74s 2025-04-05 12:34:40.845515 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2025-04-05 12:34:40.845520 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.47s 2025-04-05 12:34:40.845525 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.42s 2025-04-05 12:34:40.845531 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.37s 2025-04-05 12:34:40.845536 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2025-04-05 12:34:40.845541 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2025-04-05 12:34:40.845546 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-04-05 12:34:40.845553 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-04-05 12:34:40.845558 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 2025-04-05 12:34:40.845563 | orchestrator | horizon : Update policy file name --------------------------------------- 0.43s 2025-04-05 12:34:40.845568 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s 2025-04-05 12:34:40.845573 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-04-05 12:34:40.845579 | orchestrator | 2025-04-05 12:34:37 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:40.845584 | orchestrator | 2025-04-05 12:34:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:40.845602 | orchestrator | 2025-04-05 12:34:40 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:40.846295 | orchestrator | 2025-04-05 12:34:40 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:40.846694 | orchestrator | 2025-04-05 12:34:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:43.897147 | orchestrator | 2025-04-05 12:34:43 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:43.897540 | orchestrator | 2025-04-05 12:34:43 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:43.897767 | orchestrator | 2025-04-05 12:34:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:46.950797 | orchestrator | 2025-04-05 12:34:46 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:46.952151 | orchestrator | 
2025-04-05 12:34:46 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:46.952640 | orchestrator | 2025-04-05 12:34:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:50.003554 | orchestrator | 2025-04-05 12:34:50 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:50.005106 | orchestrator | 2025-04-05 12:34:50 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:53.060072 | orchestrator | 2025-04-05 12:34:50 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:53.060202 | orchestrator | 2025-04-05 12:34:53 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:53.061370 | orchestrator | 2025-04-05 12:34:53 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:56.106414 | orchestrator | 2025-04-05 12:34:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:56.106539 | orchestrator | 2025-04-05 12:34:56 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:56.108264 | orchestrator | 2025-04-05 12:34:56 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:34:59.157820 | orchestrator | 2025-04-05 12:34:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:34:59.157945 | orchestrator | 2025-04-05 12:34:59 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:34:59.159378 | orchestrator | 2025-04-05 12:34:59 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:02.207585 | orchestrator | 2025-04-05 12:34:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:02.207718 | orchestrator | 2025-04-05 12:35:02 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:02.209179 | orchestrator | 2025-04-05 12:35:02 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:05.254939 | orchestrator | 2025-04-05 12:35:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:05.255087 | orchestrator | 2025-04-05 12:35:05 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:05.255824 | orchestrator | 2025-04-05 12:35:05 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:08.302935 | orchestrator | 2025-04-05 12:35:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:08.303066 | orchestrator | 2025-04-05 12:35:08 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:08.305056 | orchestrator | 2025-04-05 12:35:08 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:08.305487 | orchestrator | 2025-04-05 12:35:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:11.365141 | orchestrator | 2025-04-05 12:35:11 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:11.366794 | orchestrator | 2025-04-05 12:35:11 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:14.416662 | orchestrator | 2025-04-05 12:35:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:14.416841 | orchestrator | 2025-04-05 12:35:14 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:14.419491 | orchestrator | 2025-04-05 12:35:14 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 
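The interleaved INFO lines above come from the deployment driver polling two long-running tasks (fe088ff9-7b38-4ca2-b5dc-22afd135da17 and b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb) roughly once per second until they leave the STARTED state. The following is only a minimal Python sketch of that wait-and-recheck pattern; get_task_state() is a hypothetical callback standing in for the real OSISM task-state lookup.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Sketch only: poll every task until it reaches a terminal state.
    # get_task_state(task_id) is assumed to return strings such as
    # "STARTED" or "SUCCESS"; the real client API may differ.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

In the console output above, each round of checks is about one second apart, which corresponds to the interval used in this sketch.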
2025-04-05 12:35:17.470060 | orchestrator | 2025-04-05 12:35:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:17.470183 | orchestrator | 2025-04-05 12:35:17 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:17.471715 | orchestrator | 2025-04-05 12:35:17 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:17.471886 | orchestrator | 2025-04-05 12:35:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:20.525578 | orchestrator | 2025-04-05 12:35:20 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:20.529629 | orchestrator | 2025-04-05 12:35:20 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:20.530101 | orchestrator | 2025-04-05 12:35:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:23.580295 | orchestrator | 2025-04-05 12:35:23 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state STARTED 2025-04-05 12:35:26.626396 | orchestrator | 2025-04-05 12:35:23 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED 2025-04-05 12:35:26.626471 | orchestrator | 2025-04-05 12:35:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:26.626490 | orchestrator | 2025-04-05 12:35:26.628077 | orchestrator | 2025-04-05 12:35:26 | INFO  | Task fe088ff9-7b38-4ca2-b5dc-22afd135da17 is in state SUCCESS 2025-04-05 12:35:26.628103 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-05 12:35:26.628110 | orchestrator | 2025-04-05 12:35:26.628116 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-04-05 12:35:26.628122 | orchestrator | 2025-04-05 12:35:26.628128 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-05 12:35:26.628133 | orchestrator | Saturday 05 April 2025 12:33:29 +0000 (0:00:01.081) 0:00:01.082 ******** 2025-04-05 12:35:26.628140 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:35:26.628147 | orchestrator | 2025-04-05 12:35:26.628152 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-05 12:35:26.628157 | orchestrator | Saturday 05 April 2025 12:33:30 +0000 (0:00:00.488) 0:00:01.570 ******** 2025-04-05 12:35:26.628164 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-05 12:35:26.628170 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-05 12:35:26.628175 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-05 12:35:26.628180 | orchestrator | 2025-04-05 12:35:26.628186 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-05 12:35:26.628191 | orchestrator | Saturday 05 April 2025 12:33:31 +0000 (0:00:00.759) 0:00:02.330 ******** 2025-04-05 12:35:26.628196 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:35:26.628202 | orchestrator | 2025-04-05 12:35:26.628208 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-05 12:35:26.628227 | orchestrator | Saturday 05 April 2025 12:33:31 +0000 (0:00:00.611) 0:00:02.941 ******** 2025-04-05 12:35:26.628233 | orchestrator | ok: [testbed-node-4] 
2025-04-05 12:35:26.628239 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628244 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628249 | orchestrator | 2025-04-05 12:35:26.628255 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-05 12:35:26.628260 | orchestrator | Saturday 05 April 2025 12:33:32 +0000 (0:00:00.589) 0:00:03.531 ******** 2025-04-05 12:35:26.628265 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628271 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628276 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628281 | orchestrator | 2025-04-05 12:35:26.628287 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-05 12:35:26.628292 | orchestrator | Saturday 05 April 2025 12:33:32 +0000 (0:00:00.253) 0:00:03.784 ******** 2025-04-05 12:35:26.628297 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628303 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628308 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628313 | orchestrator | 2025-04-05 12:35:26.628329 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-05 12:35:26.628334 | orchestrator | Saturday 05 April 2025 12:33:33 +0000 (0:00:00.748) 0:00:04.533 ******** 2025-04-05 12:35:26.628340 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628355 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628360 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628368 | orchestrator | 2025-04-05 12:35:26.628373 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-05 12:35:26.628379 | orchestrator | Saturday 05 April 2025 12:33:33 +0000 (0:00:00.288) 0:00:04.822 ******** 2025-04-05 12:35:26.628384 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628389 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628394 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628399 | orchestrator | 2025-04-05 12:35:26.628404 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-05 12:35:26.628409 | orchestrator | Saturday 05 April 2025 12:33:33 +0000 (0:00:00.282) 0:00:05.104 ******** 2025-04-05 12:35:26.628414 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628419 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628424 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628429 | orchestrator | 2025-04-05 12:35:26.628435 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-05 12:35:26.628440 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.288) 0:00:05.393 ******** 2025-04-05 12:35:26.628445 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628451 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.628456 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.628462 | orchestrator | 2025-04-05 12:35:26.628467 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-05 12:35:26.628472 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.396) 0:00:05.789 ******** 2025-04-05 12:35:26.628477 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628482 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628487 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628492 | 
orchestrator | 2025-04-05 12:35:26.628497 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-05 12:35:26.628502 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.255) 0:00:06.044 ******** 2025-04-05 12:35:26.628507 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:35:26.628512 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:35:26.628517 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:35:26.628522 | orchestrator | 2025-04-05 12:35:26.628527 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-05 12:35:26.628557 | orchestrator | Saturday 05 April 2025 12:33:35 +0000 (0:00:00.598) 0:00:06.643 ******** 2025-04-05 12:35:26.628563 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628568 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628573 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628578 | orchestrator | 2025-04-05 12:35:26.628583 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-05 12:35:26.628588 | orchestrator | Saturday 05 April 2025 12:33:35 +0000 (0:00:00.454) 0:00:07.098 ******** 2025-04-05 12:35:26.628597 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:35:26.628603 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:35:26.628608 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:35:26.628613 | orchestrator | 2025-04-05 12:35:26.628618 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-05 12:35:26.628623 | orchestrator | Saturday 05 April 2025 12:33:37 +0000 (0:00:02.010) 0:00:09.108 ******** 2025-04-05 12:35:26.628628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:35:26.628634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:35:26.628639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:35:26.628644 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628650 | orchestrator | 2025-04-05 12:35:26.628655 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-05 12:35:26.628660 | orchestrator | Saturday 05 April 2025 12:33:38 +0000 (0:00:00.445) 0:00:09.554 ******** 2025-04-05 12:35:26.628666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-05 12:35:26.628673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-05 12:35:26.628679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-05 
12:35:26.628684 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628689 | orchestrator | 2025-04-05 12:35:26.628695 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-05 12:35:26.628701 | orchestrator | Saturday 05 April 2025 12:33:38 +0000 (0:00:00.680) 0:00:10.234 ******** 2025-04-05 12:35:26.628708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:35:26.628717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:35:26.628723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-05 12:35:26.628764 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628771 | orchestrator | 2025-04-05 12:35:26.628777 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-05 12:35:26.628782 | orchestrator | Saturday 05 April 2025 12:33:39 +0000 (0:00:00.142) 0:00:10.377 ******** 2025-04-05 12:35:26.628790 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'f02557c0af4b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-05 12:33:36.585368', 'end': '2025-04-05 12:33:36.616474', 'delta': '0:00:00.031106', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f02557c0af4b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-05 12:35:26.628805 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '96f6380782e2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-05 12:33:37.138114', 'end': '2025-04-05 12:33:37.160299', 'delta': '0:00:00.022185', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['96f6380782e2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-05 12:35:26.628813 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '0f45e3a12268', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-05 12:33:37.590075', 'end': '2025-04-05 12:33:37.616221', 'delta': '0:00:00.026146', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0f45e3a12268'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-05 12:35:26.628818 | orchestrator | 2025-04-05 12:35:26.628824 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-05 12:35:26.628833 | orchestrator | Saturday 05 April 2025 12:33:39 +0000 (0:00:00.185) 0:00:10.563 ******** 2025-04-05 12:35:26.628839 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.628845 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.628850 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.628856 | orchestrator | 2025-04-05 12:35:26.628861 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-05 12:35:26.628867 | orchestrator | Saturday 05 April 2025 12:33:39 +0000 (0:00:00.450) 0:00:11.013 ******** 2025-04-05 12:35:26.628873 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-05 12:35:26.628879 | orchestrator | 2025-04-05 12:35:26.628884 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-05 12:35:26.628890 | orchestrator | Saturday 05 April 2025 12:33:40 +0000 (0:00:01.231) 0:00:12.244 ******** 2025-04-05 12:35:26.628896 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628902 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.628907 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.628913 | orchestrator | 2025-04-05 12:35:26.628919 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-05 12:35:26.628928 | orchestrator | Saturday 05 April 2025 12:33:41 +0000 (0:00:00.385) 0:00:12.629 ******** 2025-04-05 12:35:26.628934 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628940 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.628945 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.628951 | orchestrator | 2025-04-05 12:35:26.628957 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-05 12:35:26.628962 | orchestrator | Saturday 05 April 2025 12:33:41 +0000 (0:00:00.430) 0:00:13.060 ******** 2025-04-05 12:35:26.628968 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.628974 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.628979 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.628985 | orchestrator | 2025-04-05 12:35:26.628991 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-05 12:35:26.628997 | orchestrator | Saturday 05 April 2025 12:33:42 +0000 (0:00:00.277) 0:00:13.338 ******** 2025-04-05 12:35:26.629002 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629008 | 
orchestrator | 2025-04-05 12:35:26.629013 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-05 12:35:26.629019 | orchestrator | Saturday 05 April 2025 12:33:42 +0000 (0:00:00.099) 0:00:13.438 ******** 2025-04-05 12:35:26.629025 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629030 | orchestrator | 2025-04-05 12:35:26.629036 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-05 12:35:26.629042 | orchestrator | Saturday 05 April 2025 12:33:42 +0000 (0:00:00.175) 0:00:13.614 ******** 2025-04-05 12:35:26.629048 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629053 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629058 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629063 | orchestrator | 2025-04-05 12:35:26.629068 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-05 12:35:26.629073 | orchestrator | Saturday 05 April 2025 12:33:42 +0000 (0:00:00.351) 0:00:13.965 ******** 2025-04-05 12:35:26.629077 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629082 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629087 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629092 | orchestrator | 2025-04-05 12:35:26.629097 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-05 12:35:26.629102 | orchestrator | Saturday 05 April 2025 12:33:42 +0000 (0:00:00.257) 0:00:14.223 ******** 2025-04-05 12:35:26.629107 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629112 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629117 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629122 | orchestrator | 2025-04-05 12:35:26.629127 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-05 12:35:26.629132 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:00.301) 0:00:14.524 ******** 2025-04-05 12:35:26.629137 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629142 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629149 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629155 | orchestrator | 2025-04-05 12:35:26.629160 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-05 12:35:26.629165 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:00.308) 0:00:14.833 ******** 2025-04-05 12:35:26.629170 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629175 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629180 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629185 | orchestrator | 2025-04-05 12:35:26.629190 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-05 12:35:26.629195 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:00.398) 0:00:15.231 ******** 2025-04-05 12:35:26.629200 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629205 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629210 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629219 | orchestrator | 2025-04-05 12:35:26.629292 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-05 12:35:26.629298 | orchestrator | Saturday 05 
April 2025 12:33:44 +0000 (0:00:00.309) 0:00:15.541 ******** 2025-04-05 12:35:26.629303 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629312 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629317 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629323 | orchestrator | 2025-04-05 12:35:26.629329 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-05 12:35:26.629334 | orchestrator | Saturday 05 April 2025 12:33:44 +0000 (0:00:00.273) 0:00:15.815 ******** 2025-04-05 12:35:26.629341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad0d437a--29fb--56b5--bf7c--f26bd837f294-osd--block--ad0d437a--29fb--56b5--bf7c--f26bd837f294', 'dm-uuid-LVM-9ZdkthWXVB6K3Rmf2WfQnBTk4e9Oc36kc238xngOyUJFcgJs2g5MZoa4Lbz3mwoF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26-osd--block--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26', 'dm-uuid-LVM-O2OjUdnL7tVfem3dUvez9g72jq9uzkpPOzIgKKXfa1U0LwH2tyULlIbin9e9eTGE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eb474160--46dc--5c48--a12b--143126b3371a-osd--block--eb474160--46dc--5c48--a12b--143126b3371a', 'dm-uuid-LVM-7HDYOGMyP8dxtEsSvrd50kzn6zwf4y4nLiday3eiDhW1FE1LBnmZ2FcrZgrqYPJF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bddbd264--0785--5bf3--9ea2--553c515bd099-osd--block--bddbd264--0785--5bf3--9ea2--553c515bd099', 'dm-uuid-LVM-wXyo7BPVXJEbgjsoz8QBe2jweYiasOx7UfIeso1riU79qhOPju7RRnDlDOHFwIKP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0b1f4d1-5fd6-4d12-9233-5cdd241bfb04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ad0d437a--29fb--56b5--bf7c--f26bd837f294-osd--block--ad0d437a--29fb--56b5--bf7c--f26bd837f294'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bikryc-bkfS-JHcM-Fr5U-w3Rx-RKjF-C9Cneo', 'scsi-0QEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02', 'scsi-SQEMU_QEMU_HARDDISK_4656da48-57a2-4eb8-982a-d76718d1cb02'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26-osd--block--4ecef128--47ae--5e8f--9b67--b09b9dbd9f26'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-efD1dB-Y43d-aAs1-aK8m-F1ij-mW5L-dkSqkj', 'scsi-0QEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26', 'scsi-SQEMU_QEMU_HARDDISK_213baff1-89a7-4ff7-8a44-f121feb76d26'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2', 'scsi-SQEMU_QEMU_HARDDISK_ff9999ad-bea3-493e-9af1-c705049c2ab2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629491 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4aac11a6--844c--526d--9ac8--c50cbafa4162-osd--block--4aac11a6--844c--526d--9ac8--c50cbafa4162', 'dm-uuid-LVM-dZawb3y1Hz1eMnyCpwqDT5tztIuIALPyI0eZJi1cB8OJ2LbGpLSdCGz3xQOB4NOM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b2d6610--beab--5485--bcb7--dfee77450e0c-osd--block--7b2d6610--beab--5485--bcb7--dfee77450e0c', 'dm-uuid-LVM-hizAQ83Not4iqaZEez7Dtk8reUvUJykOQMS3puzKQqAPOViDD6XBQSPE0X2FbHH2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part1', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part14', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part15', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part16', 'scsi-SQEMU_QEMU_HARDDISK_e366d25b-af81-4f6a-8721-ed881c3a6b03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--eb474160--46dc--5c48--a12b--143126b3371a-osd--block--eb474160--46dc--5c48--a12b--143126b3371a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NFMzs0-uHDl-wQGu-jf8c-QY8l-0ieC-AMxQZc', 'scsi-0QEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6', 'scsi-SQEMU_QEMU_HARDDISK_5d2b1a52-3655-4f66-b4c6-42f0360176a6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-05 12:35:26.629614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a3485950-3182-4145-b5a1-ad5c5b1bfb6f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--bddbd264--0785--5bf3--9ea2--553c515bd099-osd--block--bddbd264--0785--5bf3--9ea2--553c515bd099'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4VJK7G-rqjJ-LOuM-gOcG-I4bi-wxKo-8lYNZZ', 'scsi-0QEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff', 'scsi-SQEMU_QEMU_HARDDISK_ba8d5f0c-914f-4739-9d89-312c5c9b23ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4aac11a6--844c--526d--9ac8--c50cbafa4162-osd--block--4aac11a6--844c--526d--9ac8--c50cbafa4162'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0385u-0YCw-OTnA-TMW3-Jmvo-qOah-ALCvFl', 'scsi-0QEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c', 'scsi-SQEMU_QEMU_HARDDISK_3319eb17-1f94-4384-b4eb-d4656240927c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7b2d6610--beab--5485--bcb7--dfee77450e0c-osd--block--7b2d6610--beab--5485--bcb7--dfee77450e0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zswFWI-bIbY-I3Br-690K-1GYU-iHSa-sg7cSi', 'scsi-0QEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f', 'scsi-SQEMU_QEMU_HARDDISK_1b7be43a-8a0c-4734-8b26-2b6a058e961f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f', 'scsi-SQEMU_QEMU_HARDDISK_af9ec2c6-8790-4d7b-8704-1ac1d2bb5c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783', 'scsi-SQEMU_QEMU_HARDDISK_cfed707b-504f-4ce7-a138-034721a1d783'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-05-11-40-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-05 12:35:26.629674 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629679 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629685 | orchestrator | 2025-04-05 12:35:26.629690 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-05 12:35:26.629696 | orchestrator | Saturday 05 April 2025 12:33:45 +0000 (0:00:00.566) 0:00:16.381 ******** 2025-04-05 12:35:26.629701 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-05 12:35:26.629707 | orchestrator | 2025-04-05 12:35:26.629712 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-05 12:35:26.629720 | orchestrator | Saturday 05 April 2025 12:33:46 +0000 (0:00:01.038) 0:00:17.420 ******** 2025-04-05 12:35:26.629725 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629745 | orchestrator | 2025-04-05 12:35:26.629751 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-05 12:35:26.629756 | orchestrator | Saturday 05 April 2025 12:33:46 +0000 (0:00:00.251) 0:00:17.671 ******** 2025-04-05 12:35:26.629762 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629767 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.629773 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.629778 | orchestrator | 2025-04-05 12:35:26.629784 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-05 12:35:26.629790 | orchestrator | Saturday 05 April 2025 12:33:46 +0000 (0:00:00.323) 0:00:17.995 ******** 2025-04-05 12:35:26.629795 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629801 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.629806 | orchestrator | ok: [testbed-node-5] 2025-04-05 
12:35:26.629811 | orchestrator | 2025-04-05 12:35:26.629817 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-05 12:35:26.629822 | orchestrator | Saturday 05 April 2025 12:33:47 +0000 (0:00:00.606) 0:00:18.602 ******** 2025-04-05 12:35:26.629828 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629833 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.629839 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.629844 | orchestrator | 2025-04-05 12:35:26.629850 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-05 12:35:26.629859 | orchestrator | Saturday 05 April 2025 12:33:47 +0000 (0:00:00.273) 0:00:18.876 ******** 2025-04-05 12:35:26.629864 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.629869 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.629875 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.629880 | orchestrator | 2025-04-05 12:35:26.629886 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-05 12:35:26.629891 | orchestrator | Saturday 05 April 2025 12:33:48 +0000 (0:00:00.764) 0:00:19.640 ******** 2025-04-05 12:35:26.629897 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629902 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629908 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629913 | orchestrator | 2025-04-05 12:35:26.629919 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-05 12:35:26.629924 | orchestrator | Saturday 05 April 2025 12:33:48 +0000 (0:00:00.288) 0:00:19.928 ******** 2025-04-05 12:35:26.629930 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629935 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629940 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629946 | orchestrator | 2025-04-05 12:35:26.629951 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-05 12:35:26.629957 | orchestrator | Saturday 05 April 2025 12:33:49 +0000 (0:00:00.431) 0:00:20.360 ******** 2025-04-05 12:35:26.629962 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.629968 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.629973 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.629979 | orchestrator | 2025-04-05 12:35:26.629984 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-05 12:35:26.629989 | orchestrator | Saturday 05 April 2025 12:33:49 +0000 (0:00:00.320) 0:00:20.680 ******** 2025-04-05 12:35:26.629995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:35:26.630001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:35:26.630006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:35:26.630012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:35:26.630043 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630049 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:35:26.630054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:35:26.630059 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:35:26.630064 | orchestrator | skipping: 
[testbed-node-4] 2025-04-05 12:35:26.630069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:35:26.630074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:35:26.630079 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630084 | orchestrator | 2025-04-05 12:35:26.630090 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-05 12:35:26.630098 | orchestrator | Saturday 05 April 2025 12:33:50 +0000 (0:00:01.210) 0:00:21.891 ******** 2025-04-05 12:35:26.630103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:35:26.630108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:35:26.630130 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:35:26.630136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:35:26.630141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:35:26.630147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:35:26.630152 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630158 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:35:26.630163 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630169 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:35:26.630174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:35:26.630186 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630192 | orchestrator | 2025-04-05 12:35:26.630197 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-05 12:35:26.630203 | orchestrator | Saturday 05 April 2025 12:33:51 +0000 (0:00:00.683) 0:00:22.574 ******** 2025-04-05 12:35:26.630208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-05 12:35:26.630214 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-05 12:35:26.630220 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-05 12:35:26.630225 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-05 12:35:26.630231 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-05 12:35:26.630236 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-05 12:35:26.630242 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-05 12:35:26.630247 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-05 12:35:26.630253 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-05 12:35:26.630258 | orchestrator | 2025-04-05 12:35:26.630264 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-05 12:35:26.630269 | orchestrator | Saturday 05 April 2025 12:33:52 +0000 (0:00:01.664) 0:00:24.239 ******** 2025-04-05 12:35:26.630275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:35:26.630280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:35:26.630286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:35:26.630291 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630297 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:35:26.630302 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:35:26.630307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:35:26.630313 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:35:26.630324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:35:26.630329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:35:26.630334 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630340 | orchestrator | 2025-04-05 12:35:26.630345 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-05 12:35:26.630351 | orchestrator | Saturday 05 April 2025 12:33:53 +0000 (0:00:00.763) 0:00:25.002 ******** 2025-04-05 12:35:26.630356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-05 12:35:26.630362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-05 12:35:26.630367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-05 12:35:26.630372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-05 12:35:26.630378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-05 12:35:26.630386 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-05 12:35:26.630392 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630398 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-05 12:35:26.630408 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-05 12:35:26.630414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-05 12:35:26.630419 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630425 | orchestrator | 2025-04-05 12:35:26.630430 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-05 12:35:26.630435 | orchestrator | Saturday 05 April 2025 12:33:54 +0000 (0:00:00.559) 0:00:25.562 ******** 2025-04-05 12:35:26.630441 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:35:26.630450 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:35:26.630455 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:35:26.630461 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630466 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:35:26.630472 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:35:26.630477 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-05 12:35:26.630483 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630488 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-05 12:35:26.630497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-05 12:35:26.630503 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 
'addr': '192.168.16.12'})  2025-04-05 12:35:26.630508 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630514 | orchestrator | 2025-04-05 12:35:26.630519 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-05 12:35:26.630524 | orchestrator | Saturday 05 April 2025 12:33:54 +0000 (0:00:00.394) 0:00:25.956 ******** 2025-04-05 12:35:26.630530 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:35:26.630535 | orchestrator | 2025-04-05 12:35:26.630541 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-05 12:35:26.630547 | orchestrator | Saturday 05 April 2025 12:33:55 +0000 (0:00:00.855) 0:00:26.812 ******** 2025-04-05 12:35:26.630552 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630558 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630563 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630568 | orchestrator | 2025-04-05 12:35:26.630574 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-05 12:35:26.630579 | orchestrator | Saturday 05 April 2025 12:33:55 +0000 (0:00:00.369) 0:00:27.182 ******** 2025-04-05 12:35:26.630585 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630590 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630596 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630601 | orchestrator | 2025-04-05 12:35:26.630609 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-05 12:35:26.630615 | orchestrator | Saturday 05 April 2025 12:33:56 +0000 (0:00:00.343) 0:00:27.525 ******** 2025-04-05 12:35:26.630620 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630626 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630631 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630637 | orchestrator | 2025-04-05 12:35:26.630642 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-05 12:35:26.630648 | orchestrator | Saturday 05 April 2025 12:33:56 +0000 (0:00:00.336) 0:00:27.861 ******** 2025-04-05 12:35:26.630653 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.630662 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.630668 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.630674 | orchestrator | 2025-04-05 12:35:26.630679 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-05 12:35:26.630685 | orchestrator | Saturday 05 April 2025 12:33:57 +0000 (0:00:00.645) 0:00:28.507 ******** 2025-04-05 12:35:26.630690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:35:26.630696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:35:26.630701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:35:26.630710 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630718 | orchestrator | 2025-04-05 12:35:26.630724 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-05 12:35:26.630729 | orchestrator | Saturday 05 April 2025 12:33:57 +0000 (0:00:00.405) 0:00:28.912 ******** 2025-04-05 12:35:26.630747 | orchestrator 
| skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:35:26.630752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:35:26.630758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:35:26.630763 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630769 | orchestrator | 2025-04-05 12:35:26.630774 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-05 12:35:26.630780 | orchestrator | Saturday 05 April 2025 12:33:58 +0000 (0:00:00.397) 0:00:29.309 ******** 2025-04-05 12:35:26.630785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:35:26.630791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:35:26.630796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:35:26.630802 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630807 | orchestrator | 2025-04-05 12:35:26.630813 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:35:26.630818 | orchestrator | Saturday 05 April 2025 12:33:58 +0000 (0:00:00.388) 0:00:29.698 ******** 2025-04-05 12:35:26.630824 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:35:26.630829 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:35:26.630835 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:35:26.630840 | orchestrator | 2025-04-05 12:35:26.630846 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-05 12:35:26.630851 | orchestrator | Saturday 05 April 2025 12:33:58 +0000 (0:00:00.341) 0:00:30.040 ******** 2025-04-05 12:35:26.630857 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-05 12:35:26.630862 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-05 12:35:26.630868 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-05 12:35:26.630874 | orchestrator | 2025-04-05 12:35:26.630879 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-05 12:35:26.630884 | orchestrator | Saturday 05 April 2025 12:33:59 +0000 (0:00:00.549) 0:00:30.589 ******** 2025-04-05 12:35:26.630890 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630895 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630901 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630906 | orchestrator | 2025-04-05 12:35:26.630912 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-05 12:35:26.630917 | orchestrator | Saturday 05 April 2025 12:33:59 +0000 (0:00:00.501) 0:00:31.091 ******** 2025-04-05 12:35:26.630923 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630928 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.630934 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630939 | orchestrator | 2025-04-05 12:35:26.630945 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-05 12:35:26.630953 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.341) 0:00:31.433 ******** 2025-04-05 12:35:26.630959 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-05 12:35:26.630964 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.630970 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-05 12:35:26.630976 | orchestrator | skipping: 
[testbed-node-4] 2025-04-05 12:35:26.630981 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-05 12:35:26.630987 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.630992 | orchestrator | 2025-04-05 12:35:26.630998 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-05 12:35:26.631003 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.502) 0:00:31.936 ******** 2025-04-05 12:35:26.631009 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-05 12:35:26.631018 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.631024 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-05 12:35:26.631029 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.631035 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-05 12:35:26.631041 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.631046 | orchestrator | 2025-04-05 12:35:26.631052 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-05 12:35:26.631057 | orchestrator | Saturday 05 April 2025 12:34:00 +0000 (0:00:00.315) 0:00:32.251 ******** 2025-04-05 12:35:26.631063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-05 12:35:26.631068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-05 12:35:26.631074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-05 12:35:26.631079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-05 12:35:26.631085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-05 12:35:26.631090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-05 12:35:26.631096 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.631101 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-05 12:35:26.631107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-05 12:35:26.631112 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.631117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-05 12:35:26.631123 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.631128 | orchestrator | 2025-04-05 12:35:26.631134 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-05 12:35:26.631139 | orchestrator | Saturday 05 April 2025 12:34:02 +0000 (0:00:01.124) 0:00:33.375 ******** 2025-04-05 12:35:26.631144 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.631150 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.631155 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:35:26.631161 | orchestrator | 2025-04-05 12:35:26.631166 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-05 12:35:26.631172 | orchestrator | Saturday 05 April 2025 12:34:02 +0000 (0:00:00.347) 0:00:33.723 ******** 2025-04-05 12:35:26.631177 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:35:26.631182 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:35:26.631188 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:35:26.631193 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-05 12:35:26.631199 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-05 12:35:26.631207 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-05 12:35:26.631212 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-05 12:35:26.631218 | orchestrator | 2025-04-05 12:35:26.631223 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-05 12:35:26.631229 | orchestrator | Saturday 05 April 2025 12:34:03 +0000 (0:00:01.072) 0:00:34.796 ******** 2025-04-05 12:35:26.631234 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-05 12:35:26.631240 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-05 12:35:26.631245 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-05 12:35:26.631251 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-05 12:35:26.631256 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-05 12:35:26.631265 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-05 12:35:26.631270 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-05 12:35:26.631276 | orchestrator | 2025-04-05 12:35:26.631281 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-04-05 12:35:26.631287 | orchestrator | Saturday 05 April 2025 12:34:06 +0000 (0:00:02.657) 0:00:37.453 ******** 2025-04-05 12:35:26.631293 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:35:26.631298 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:35:26.631304 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-04-05 12:35:26.631309 | orchestrator | 2025-04-05 12:35:26.631315 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-04-05 12:35:26.631323 | orchestrator | Saturday 05 April 2025 12:34:06 +0000 (0:00:00.657) 0:00:38.110 ******** 2025-04-05 12:35:26.631330 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:35:26.631337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:35:26.631343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 
'size': 3, 'type': 1}) 2025-04-05 12:35:26.631348 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:35:26.631354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-05 12:35:26.631359 | orchestrator | 2025-04-05 12:35:26.631365 | orchestrator | TASK [generate keys] *********************************************************** 2025-04-05 12:35:26.631371 | orchestrator | Saturday 05 April 2025 12:34:43 +0000 (0:00:36.560) 0:01:14.671 ******** 2025-04-05 12:35:26.631376 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631382 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631393 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631398 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631404 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631409 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-04-05 12:35:26.631415 | orchestrator | 2025-04-05 12:35:26.631420 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-04-05 12:35:26.631426 | orchestrator | Saturday 05 April 2025 12:34:59 +0000 (0:00:16.271) 0:01:30.943 ******** 2025-04-05 12:35:26.631431 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631437 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631445 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631451 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631456 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631467 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-05 12:35:26.631473 | orchestrator | 2025-04-05 12:35:26.631479 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-04-05 12:35:26.631488 | orchestrator | Saturday 05 April 2025 12:35:08 +0000 (0:00:08.847) 0:01:39.790 ******** 2025-04-05 12:35:26.631494 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-05 12:35:26.631499 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-05 12:35:26.631505 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-05 12:35:26.631510 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-05 12:35:26.631516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-05 12:35:26.631521 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-05 12:35:26.631527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-05 12:35:26.631532 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-05 12:35:26.631538 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-05 12:35:26.631543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-05 12:35:26.631549 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-05 12:35:26.631557 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-05 12:35:29.686222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-05 12:35:29.686325 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-05 12:35:29.686344 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-05 12:35:29.686359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-05 12:35:29.686373 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-05 12:35:29.686387 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-05 12:35:29.686402 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-04-05 12:35:29.686416 | orchestrator |
2025-04-05 12:35:29.686431 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:35:29.686447 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-05 12:35:29.686463 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-04-05 12:35:29.686478 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-04-05 12:35:29.686492 | orchestrator |
2025-04-05 12:35:29.686506 | orchestrator |
2025-04-05 12:35:29.686520 | orchestrator |
2025-04-05 12:35:29.686534 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:35:29.686548 | orchestrator | Saturday 05 April 2025 12:35:25 +0000 (0:00:16.768) 0:01:56.559 ********
2025-04-05 12:35:29.686562 | orchestrator | ===============================================================================
2025-04-05 12:35:29.686600 | orchestrator | create openstack pool(s) ----------------------------------------------- 36.56s
2025-04-05 12:35:29.686615 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.77s
2025-04-05 12:35:29.686629 | orchestrator | generate keys ---------------------------------------------------------- 16.27s
2025-04-05 12:35:29.686643 | orchestrator | get keys from monitors -------------------------------------------------- 8.85s
2025-04-05 12:35:29.686657 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.66s
2025-04-05 12:35:29.686671 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.01s
2025-04-05 12:35:29.686685 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.66s
2025-04-05 12:35:29.686699 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.23s
2025-04-05 12:35:29.686713 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.21s
2025-04-05 12:35:29.686770 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.12s
2025-04-05 12:35:29.686787 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.07s
2025-04-05 12:35:29.686804 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.04s
2025-04-05 12:35:29.686820 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.86s
2025-04-05 12:35:29.686836 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.76s
2025-04-05 12:35:29.686852 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4 ---- 0.76s
2025-04-05 12:35:29.686867 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.76s
2025-04-05 12:35:29.686883 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.75s
2025-04-05 12:35:29.686898 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.68s
2025-04-05 12:35:29.686913 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.68s
2025-04-05 12:35:29.686928 | orchestrator | Include tasks from the ceph-osd role ------------------------------------ 0.66s
2025-04-05 12:35:29.686944 | orchestrator | 2025-04-05 12:35:26 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED
2025-04-05 12:35:29.686961 | orchestrator | 2025-04-05 12:35:26 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED
2025-04-05 12:35:29.686975 | orchestrator | 2025-04-05 12:35:26 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:35:29.687007 | orchestrator | 2025-04-05 12:35:29 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state STARTED
2025-04-05 12:35:29.687150 | orchestrator | 2025-04-05 12:35:29 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED
2025-04-05 12:35:32.732918 | orchestrator | 2025-04-05 12:35:29 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:35:32.733043 | orchestrator | 2025-04-05 12:35:32 | INFO  | Task b5351d7b-4854-4b60-bcdc-4f43dd7a5ddb is in state SUCCESS
2025-04-05 12:35:32.733988 | orchestrator |
2025-04-05 12:35:32.734399 | orchestrator |
2025-04-05 12:35:32.734418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-05 12:35:32.734433 | orchestrator |
2025-04-05 12:35:32.734447 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-05 12:35:32.734461 | orchestrator | Saturday 05 April 2025 12:33:14 +0000 (0:00:00.281) 0:00:00.281 ********
2025-04-05 12:35:32.734476 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:35:32.734492 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:35:32.734506 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:35:32.734521 | orchestrator |
2025-04-05 12:35:32.734535 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-04-05 12:35:32.734549 | orchestrator | Saturday 05 April 2025 12:33:14 +0000 (0:00:00.345) 0:00:00.626 ******** 2025-04-05 12:35:32.734563 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-05 12:35:32.734603 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-05 12:35:32.734618 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-05 12:35:32.734632 | orchestrator | 2025-04-05 12:35:32.734646 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-04-05 12:35:32.734660 | orchestrator | 2025-04-05 12:35:32.734674 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.734688 | orchestrator | Saturday 05 April 2025 12:33:15 +0000 (0:00:00.421) 0:00:01.047 ******** 2025-04-05 12:35:32.734702 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:35:32.734718 | orchestrator | 2025-04-05 12:35:32.734772 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-04-05 12:35:32.734789 | orchestrator | Saturday 05 April 2025 12:33:15 +0000 (0:00:00.557) 0:00:01.605 ******** 2025-04-05 12:35:32.734808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.734828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.734902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.734933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.734950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.734964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.734982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.734999 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735031 | orchestrator | 2025-04-05 12:35:32.735046 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-05 12:35:32.735062 | orchestrator | Saturday 05 April 2025 12:33:17 +0000 (0:00:01.988) 0:00:03.594 ******** 2025-04-05 12:35:32.735090 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-05 12:35:32.735107 | orchestrator | 2025-04-05 12:35:32.735123 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-05 12:35:32.735138 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:00.515) 0:00:04.110 ******** 2025-04-05 12:35:32.735154 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.735170 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:35:32.735186 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:35:32.735202 | orchestrator | 2025-04-05 12:35:32.735218 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-05 12:35:32.735233 | orchestrator | Saturday 05 April 2025 12:33:18 +0000 (0:00:00.377) 0:00:04.487 ******** 2025-04-05 12:35:32.735249 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:35:32.735265 | orchestrator | 2025-04-05 12:35:32.735280 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.735296 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:00.388) 0:00:04.876 ******** 2025-04-05 12:35:32.735311 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:35:32.735327 | orchestrator | 2025-04-05 12:35:32.735341 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-04-05 12:35:32.735355 | orchestrator | Saturday 05 April 2025 12:33:19 +0000 (0:00:00.571) 0:00:05.447 ******** 2025-04-05 12:35:32.735370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.735386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.735401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.735434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.735530 | orchestrator | 2025-04-05 12:35:32.735545 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-05 12:35:32.735565 | orchestrator | Saturday 05 April 2025 12:33:22 +0000 (0:00:02.719) 0:00:08.167 ******** 2025-04-05 12:35:32.735588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.735604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.735619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.735634 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.735651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.735672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.735694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.735714 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.735729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.735769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.735784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.735799 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.735813 | orchestrator | 
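The "Copying over backend internal TLS certificate" items above (and the matching key-copy task that follows) are skipped on all three nodes, which lines up with every frontend in the dumped service definitions carrying tls_backend: 'no'. As a minimal, illustrative Python sketch of that decision (not kolla-ansible's actual task code; the dict below is trimmed from the loop items in this log):

    # Illustrative only: mirrors the shape of the loop items in this log,
    # not kolla-ansible's real conditionals. Values trimmed from the output above.
    keystone_services = {
        "keystone": {
            "container_name": "keystone",
            "enabled": True,
            "image": "registry.osism.tech/kolla/keystone:2024.1",
            "haproxy": {
                "keystone_internal": {"enabled": True, "port": "5000", "tls_backend": "no"},
                "keystone_external": {"enabled": True, "port": "5000", "tls_backend": "no"},
            },
        },
        "keystone-ssh": {"container_name": "keystone_ssh", "enabled": True},
        "keystone-fernet": {"container_name": "keystone_fernet", "enabled": True},
    }

    def needs_backend_tls(service):
        # Backend TLS files are only needed if some frontend asks for tls_backend 'yes'.
        return any(fe.get("tls_backend") == "yes" for fe in service.get("haproxy", {}).values())

    for name, svc in keystone_services.items():
        if svc.get("enabled"):
            verdict = "copy cert/key" if needs_backend_tls(svc) else "skip (tls_backend is 'no')"
            print(f"{name}: {verdict}")

Running the sketch prints "skip (tls_backend is 'no')" for all three services, matching the skipped results recorded above and below.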
2025-04-05 12:35:32.735828 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-04-05 12:35:32.735842 | orchestrator | Saturday 05 April 2025 12:33:23 +0000 (0:00:00.707) 0:00:08.874 ******** 2025-04-05 12:35:32.735856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.735883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.735904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.735919 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.735934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.735949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.735963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.735994 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.736014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-05 12:35:32.736037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.736052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-05 12:35:32.736066 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.736081 | orchestrator | 2025-04-05 12:35:32.736095 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-04-05 12:35:32.736109 | orchestrator | Saturday 05 April 2025 12:33:24 +0000 (0:00:00.993) 0:00:09.868 ******** 2025-04-05 12:35:32.736123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736268 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736282 | orchestrator | 2025-04-05 12:35:32.736297 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-04-05 12:35:32.736311 | orchestrator | Saturday 05 April 2025 12:33:27 +0000 (0:00:03.029) 0:00:12.898 ******** 2025-04-05 12:35:32.736336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.736367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-04-05 12:35:32.736393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.736414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.736444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.736494 | orchestrator | 2025-04-05 12:35:32.736509 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-04-05 12:35:32.736523 | orchestrator | Saturday 05 April 2025 12:33:32 +0000 (0:00:05.217) 0:00:18.115 ******** 2025-04-05 12:35:32.736537 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.736551 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:35:32.736565 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:35:32.736579 | orchestrator | 2025-04-05 12:35:32.736593 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-04-05 12:35:32.736606 | orchestrator | Saturday 05 April 2025 12:33:33 +0000 (0:00:01.554) 0:00:19.670 ******** 2025-04-05 12:35:32.736620 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.736634 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.736648 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.736667 | orchestrator | 2025-04-05 12:35:32.736681 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-04-05 12:35:32.736695 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.682) 0:00:20.352 ******** 2025-04-05 12:35:32.736709 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.736722 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.736789 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.736806 | orchestrator | 2025-04-05 12:35:32.736820 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-04-05 12:35:32.736833 | orchestrator | Saturday 05 April 2025 12:33:34 +0000 (0:00:00.302) 0:00:20.655 ******** 2025-04-05 12:35:32.736847 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.736861 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.736875 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.736888 | orchestrator | 2025-04-05 12:35:32.736903 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-04-05 12:35:32.736916 | orchestrator | Saturday 05 April 2025 12:33:35 +0000 (0:00:00.305) 0:00:20.961 ******** 2025-04-05 12:35:32.736943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.736977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.736995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.737009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.737028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-05 12:35:32.737041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.737060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.737073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.737086 | orchestrator | 2025-04-05 12:35:32.737099 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.737111 | orchestrator | Saturday 05 April 2025 12:33:37 +0000 (0:00:02.207) 0:00:23.168 ******** 2025-04-05 12:35:32.737124 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.737136 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.737148 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.737161 | orchestrator | 2025-04-05 12:35:32.737173 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] 
****************************** 2025-04-05 12:35:32.737185 | orchestrator | Saturday 05 April 2025 12:33:37 +0000 (0:00:00.333) 0:00:23.502 ******** 2025-04-05 12:35:32.737198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-05 12:35:32.737210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-05 12:35:32.737223 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-05 12:35:32.737235 | orchestrator | 2025-04-05 12:35:32.737247 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-05 12:35:32.737259 | orchestrator | Saturday 05 April 2025 12:33:39 +0000 (0:00:02.118) 0:00:25.620 ******** 2025-04-05 12:35:32.737272 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:35:32.737284 | orchestrator | 2025-04-05 12:35:32.737296 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-04-05 12:35:32.737308 | orchestrator | Saturday 05 April 2025 12:33:40 +0000 (0:00:00.663) 0:00:26.284 ******** 2025-04-05 12:35:32.737320 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.737332 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.737344 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.737357 | orchestrator | 2025-04-05 12:35:32.737369 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-04-05 12:35:32.737381 | orchestrator | Saturday 05 April 2025 12:33:41 +0000 (0:00:01.446) 0:00:27.731 ******** 2025-04-05 12:35:32.737393 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-05 12:35:32.737405 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:35:32.737417 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-05 12:35:32.737429 | orchestrator | 2025-04-05 12:35:32.737447 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-04-05 12:35:32.737459 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:01.069) 0:00:28.801 ******** 2025-04-05 12:35:32.737471 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.737484 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:35:32.737496 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:35:32.737508 | orchestrator | 2025-04-05 12:35:32.737525 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-04-05 12:35:32.737538 | orchestrator | Saturday 05 April 2025 12:33:43 +0000 (0:00:00.226) 0:00:29.027 ******** 2025-04-05 12:35:32.737551 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-05 12:35:32.737571 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-05 12:35:32.737584 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-05 12:35:32.737596 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-05 12:35:32.737609 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-05 12:35:32.737621 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-05 12:35:32.737633 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-05 12:35:32.737646 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-05 12:35:32.737658 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-05 12:35:32.737670 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-05 12:35:32.737682 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-05 12:35:32.737695 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-05 12:35:32.737707 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-05 12:35:32.737719 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-05 12:35:32.737745 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-05 12:35:32.737759 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:35:32.737771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:35:32.737784 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:35:32.737796 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:35:32.737809 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:35:32.737821 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:35:32.737833 | orchestrator | 2025-04-05 12:35:32.737846 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-04-05 12:35:32.737858 | orchestrator | Saturday 05 April 2025 12:33:52 +0000 (0:00:09.582) 0:00:38.610 ******** 2025-04-05 12:35:32.737870 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:35:32.737882 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:35:32.737894 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:35:32.737907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:35:32.737925 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:35:32.737937 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:35:32.737950 | orchestrator | 2025-04-05 12:35:32.737962 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-04-05 12:35:32.737974 | orchestrator | Saturday 05 April 2025 12:33:56 +0000 (0:00:03.650) 0:00:42.260 ******** 2025-04-05 12:35:32.737997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.738012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.738056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-05 12:35:32.738070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 
12:35:32.738095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.738108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-05 12:35:32.738128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.738142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.738155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-05 12:35:32.738168 | orchestrator | 2025-04-05 12:35:32.738180 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.738193 | orchestrator | Saturday 05 April 2025 12:33:58 +0000 (0:00:02.327) 0:00:44.588 ******** 2025-04-05 12:35:32.738205 | orchestrator | skipping: [testbed-node-0] 
2025-04-05 12:35:32.738217 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.738230 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.738242 | orchestrator | 2025-04-05 12:35:32.738254 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-04-05 12:35:32.738267 | orchestrator | Saturday 05 April 2025 12:33:59 +0000 (0:00:00.424) 0:00:45.013 ******** 2025-04-05 12:35:32.738279 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738297 | orchestrator | 2025-04-05 12:35:32.738310 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-04-05 12:35:32.738322 | orchestrator | Saturday 05 April 2025 12:34:01 +0000 (0:00:02.254) 0:00:47.268 ******** 2025-04-05 12:35:32.738334 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738347 | orchestrator | 2025-04-05 12:35:32.738359 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-04-05 12:35:32.738371 | orchestrator | Saturday 05 April 2025 12:34:04 +0000 (0:00:02.733) 0:00:50.001 ******** 2025-04-05 12:35:32.738383 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:35:32.738395 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:35:32.738408 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.738420 | orchestrator | 2025-04-05 12:35:32.738432 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-04-05 12:35:32.738444 | orchestrator | Saturday 05 April 2025 12:34:04 +0000 (0:00:00.718) 0:00:50.719 ******** 2025-04-05 12:35:32.738457 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.738469 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:35:32.738481 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:35:32.738493 | orchestrator | 2025-04-05 12:35:32.738505 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-04-05 12:35:32.738518 | orchestrator | Saturday 05 April 2025 12:34:05 +0000 (0:00:00.505) 0:00:51.225 ******** 2025-04-05 12:35:32.738530 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.738542 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.738554 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.738566 | orchestrator | 2025-04-05 12:35:32.738578 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-04-05 12:35:32.738591 | orchestrator | Saturday 05 April 2025 12:34:05 +0000 (0:00:00.491) 0:00:51.716 ******** 2025-04-05 12:35:32.738603 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738615 | orchestrator | 2025-04-05 12:35:32.738627 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-04-05 12:35:32.738639 | orchestrator | Saturday 05 April 2025 12:34:16 +0000 (0:00:10.836) 0:01:02.553 ******** 2025-04-05 12:35:32.738652 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738664 | orchestrator | 2025-04-05 12:35:32.738680 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-05 12:35:32.738693 | orchestrator | Saturday 05 April 2025 12:34:25 +0000 (0:00:08.559) 0:01:11.112 ******** 2025-04-05 12:35:32.738705 | orchestrator | 2025-04-05 12:35:32.738718 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-05 12:35:32.738730 | orchestrator | Saturday 05 April 2025 
12:34:25 +0000 (0:00:00.268) 0:01:11.380 ******** 2025-04-05 12:35:32.738759 | orchestrator | 2025-04-05 12:35:32.738771 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-05 12:35:32.738784 | orchestrator | Saturday 05 April 2025 12:34:25 +0000 (0:00:00.052) 0:01:11.433 ******** 2025-04-05 12:35:32.738796 | orchestrator | 2025-04-05 12:35:32.738808 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-04-05 12:35:32.738825 | orchestrator | Saturday 05 April 2025 12:34:25 +0000 (0:00:00.055) 0:01:11.488 ******** 2025-04-05 12:35:32.738838 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738851 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:35:32.738863 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:35:32.738875 | orchestrator | 2025-04-05 12:35:32.738888 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-04-05 12:35:32.738900 | orchestrator | Saturday 05 April 2025 12:34:38 +0000 (0:00:13.025) 0:01:24.513 ******** 2025-04-05 12:35:32.738912 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.738924 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:35:32.738937 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:35:32.738949 | orchestrator | 2025-04-05 12:35:32.738961 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-04-05 12:35:32.738980 | orchestrator | Saturday 05 April 2025 12:34:42 +0000 (0:00:04.129) 0:01:28.643 ******** 2025-04-05 12:35:32.738993 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:35:32.739005 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:35:32.739017 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.739030 | orchestrator | 2025-04-05 12:35:32.739042 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.739054 | orchestrator | Saturday 05 April 2025 12:34:50 +0000 (0:00:07.502) 0:01:36.146 ******** 2025-04-05 12:35:32.739066 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:35:32.739079 | orchestrator | 2025-04-05 12:35:32.739091 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-04-05 12:35:32.739103 | orchestrator | Saturday 05 April 2025 12:34:51 +0000 (0:00:00.727) 0:01:36.874 ******** 2025-04-05 12:35:32.739115 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:35:32.739127 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.739140 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:35:32.739152 | orchestrator | 2025-04-05 12:35:32.739164 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-04-05 12:35:32.739176 | orchestrator | Saturday 05 April 2025 12:34:51 +0000 (0:00:00.659) 0:01:37.533 ******** 2025-04-05 12:35:32.739189 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:35:32.739201 | orchestrator | 2025-04-05 12:35:32.739213 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-04-05 12:35:32.739226 | orchestrator | Saturday 05 April 2025 12:34:53 +0000 (0:00:01.579) 0:01:39.113 ******** 2025-04-05 12:35:32.739238 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-04-05 12:35:32.739250 | orchestrator | 2025-04-05 12:35:32.739263 | 
orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-04-05 12:35:32.739275 | orchestrator | Saturday 05 April 2025 12:35:01 +0000 (0:00:08.215) 0:01:47.329 ******** 2025-04-05 12:35:32.739292 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-04-05 12:35:32.739305 | orchestrator | 2025-04-05 12:35:32.739317 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-04-05 12:35:32.739329 | orchestrator | Saturday 05 April 2025 12:35:21 +0000 (0:00:19.982) 0:02:07.311 ******** 2025-04-05 12:35:32.739342 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-04-05 12:35:32.739354 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-05 12:35:32.739366 | orchestrator | 2025-04-05 12:35:32.739379 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-05 12:35:32.739391 | orchestrator | Saturday 05 April 2025 12:35:27 +0000 (0:00:06.226) 0:02:13.538 ******** 2025-04-05 12:35:32.739403 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.739416 | orchestrator | 2025-04-05 12:35:32.739428 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-05 12:35:32.739440 | orchestrator | Saturday 05 April 2025 12:35:27 +0000 (0:00:00.105) 0:02:13.643 ******** 2025-04-05 12:35:32.739453 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.739465 | orchestrator | 2025-04-05 12:35:32.739477 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-05 12:35:32.739490 | orchestrator | Saturday 05 April 2025 12:35:28 +0000 (0:00:00.179) 0:02:13.822 ******** 2025-04-05 12:35:32.739502 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.739519 | orchestrator | 2025-04-05 12:35:32.739531 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-05 12:35:32.739544 | orchestrator | Saturday 05 April 2025 12:35:28 +0000 (0:00:00.104) 0:02:13.926 ******** 2025-04-05 12:35:32.739556 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.739569 | orchestrator | 2025-04-05 12:35:32.739586 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-05 12:35:32.739598 | orchestrator | Saturday 05 April 2025 12:35:28 +0000 (0:00:00.324) 0:02:14.251 ******** 2025-04-05 12:35:32.739620 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:35:32.739633 | orchestrator | 2025-04-05 12:35:32.739646 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-05 12:35:32.739658 | orchestrator | Saturday 05 April 2025 12:35:31 +0000 (0:00:03.352) 0:02:17.604 ******** 2025-04-05 12:35:32.739671 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:35:32.739683 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:35:32.739695 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:35:32.739708 | orchestrator | 2025-04-05 12:35:32.739720 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:35:32.739777 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-05 12:35:32.739793 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 
skipped=10  rescued=0 ignored=0 2025-04-05 12:35:32.739811 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-05 12:35:35.762816 | orchestrator | 2025-04-05 12:35:35.762932 | orchestrator | 2025-04-05 12:35:35.762951 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:35:35.762967 | orchestrator | Saturday 05 April 2025 12:35:32 +0000 (0:00:00.446) 0:02:18.051 ******** 2025-04-05 12:35:35.762981 | orchestrator | =============================================================================== 2025-04-05 12:35:35.762995 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.98s 2025-04-05 12:35:35.763009 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.03s 2025-04-05 12:35:35.763023 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 10.84s 2025-04-05 12:35:35.763037 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.58s 2025-04-05 12:35:35.763051 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.56s 2025-04-05 12:35:35.763065 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.22s 2025-04-05 12:35:35.763079 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.50s 2025-04-05 12:35:35.763092 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.23s 2025-04-05 12:35:35.763106 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.22s 2025-04-05 12:35:35.763120 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.13s 2025-04-05 12:35:35.763133 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.65s 2025-04-05 12:35:35.763147 | orchestrator | keystone : Creating default user role ----------------------------------- 3.35s 2025-04-05 12:35:35.763257 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.03s 2025-04-05 12:35:35.763278 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.73s 2025-04-05 12:35:35.763293 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.72s 2025-04-05 12:35:35.763306 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s 2025-04-05 12:35:35.763320 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.25s 2025-04-05 12:35:35.763334 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.21s 2025-04-05 12:35:35.763348 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.12s 2025-04-05 12:35:35.763361 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.99s 2025-04-05 12:35:35.763376 | orchestrator | 2025-04-05 12:35:32 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:35.763391 | orchestrator | 2025-04-05 12:35:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:35.763462 | orchestrator | 2025-04-05 12:35:35 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:35.763862 | orchestrator | 2025-04-05 12:35:35 | 
INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:35.763889 | orchestrator | 2025-04-05 12:35:35 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:35.763905 | orchestrator | 2025-04-05 12:35:35 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:35.763927 | orchestrator | 2025-04-05 12:35:35 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:35.764567 | orchestrator | 2025-04-05 12:35:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:38.804153 | orchestrator | 2025-04-05 12:35:38 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:38.805206 | orchestrator | 2025-04-05 12:35:38 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:38.805243 | orchestrator | 2025-04-05 12:35:38 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:38.805866 | orchestrator | 2025-04-05 12:35:38 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:38.806641 | orchestrator | 2025-04-05 12:35:38 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:41.846108 | orchestrator | 2025-04-05 12:35:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:41.846239 | orchestrator | 2025-04-05 12:35:41 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:41.849907 | orchestrator | 2025-04-05 12:35:41 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:41.851001 | orchestrator | 2025-04-05 12:35:41 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:41.852752 | orchestrator | 2025-04-05 12:35:41 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:41.854487 | orchestrator | 2025-04-05 12:35:41 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:44.898677 | orchestrator | 2025-04-05 12:35:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:44.898824 | orchestrator | 2025-04-05 12:35:44 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:44.900151 | orchestrator | 2025-04-05 12:35:44 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:44.901486 | orchestrator | 2025-04-05 12:35:44 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:44.903254 | orchestrator | 2025-04-05 12:35:44 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:44.904190 | orchestrator | 2025-04-05 12:35:44 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:44.904388 | orchestrator | 2025-04-05 12:35:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:47.952680 | orchestrator | 2025-04-05 12:35:47 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:47.953685 | orchestrator | 2025-04-05 12:35:47 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:47.956051 | orchestrator | 2025-04-05 12:35:47 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:47.957069 | orchestrator | 2025-04-05 12:35:47 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:47.959630 | orchestrator | 
2025-04-05 12:35:47 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:51.002855 | orchestrator | 2025-04-05 12:35:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:51.003003 | orchestrator | 2025-04-05 12:35:51 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:51.003634 | orchestrator | 2025-04-05 12:35:51 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state STARTED 2025-04-05 12:35:51.005185 | orchestrator | 2025-04-05 12:35:51 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:51.006702 | orchestrator | 2025-04-05 12:35:51 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:51.008111 | orchestrator | 2025-04-05 12:35:51 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:54.058627 | orchestrator | 2025-04-05 12:35:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:54.058814 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:54.059657 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:35:54.059690 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task a1518ff3-6094-416a-bf84-c922ebf38aed is in state SUCCESS 2025-04-05 12:35:54.060395 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:54.062103 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:54.062874 | orchestrator | 2025-04-05 12:35:54 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:54.062910 | orchestrator | 2025-04-05 12:35:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:35:57.119242 | orchestrator | 2025-04-05 12:35:57 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:35:57.121727 | orchestrator | 2025-04-05 12:35:57 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:35:57.123942 | orchestrator | 2025-04-05 12:35:57 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:35:57.125872 | orchestrator | 2025-04-05 12:35:57 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:35:57.127020 | orchestrator | 2025-04-05 12:35:57 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:35:57.127307 | orchestrator | 2025-04-05 12:35:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:00.176062 | orchestrator | 2025-04-05 12:36:00 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:00.178277 | orchestrator | 2025-04-05 12:36:00 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:00.179672 | orchestrator | 2025-04-05 12:36:00 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:00.182668 | orchestrator | 2025-04-05 12:36:00 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:00.184271 | orchestrator | 2025-04-05 12:36:00 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:03.241953 | orchestrator | 2025-04-05 12:36:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:03.242127 | orchestrator | 
2025-04-05 12:36:03 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:03.243410 | orchestrator | 2025-04-05 12:36:03 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:03.244681 | orchestrator | 2025-04-05 12:36:03 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:03.246076 | orchestrator | 2025-04-05 12:36:03 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:03.247353 | orchestrator | 2025-04-05 12:36:03 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:03.247582 | orchestrator | 2025-04-05 12:36:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:06.292109 | orchestrator | 2025-04-05 12:36:06 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:06.293166 | orchestrator | 2025-04-05 12:36:06 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:06.293884 | orchestrator | 2025-04-05 12:36:06 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:06.294830 | orchestrator | 2025-04-05 12:36:06 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:06.296016 | orchestrator | 2025-04-05 12:36:06 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:09.339004 | orchestrator | 2025-04-05 12:36:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:09.339123 | orchestrator | 2025-04-05 12:36:09 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:09.340032 | orchestrator | 2025-04-05 12:36:09 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:09.340067 | orchestrator | 2025-04-05 12:36:09 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:09.340088 | orchestrator | 2025-04-05 12:36:09 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:09.341618 | orchestrator | 2025-04-05 12:36:09 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:09.341882 | orchestrator | 2025-04-05 12:36:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:12.381438 | orchestrator | 2025-04-05 12:36:12 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:12.381970 | orchestrator | 2025-04-05 12:36:12 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:12.382063 | orchestrator | 2025-04-05 12:36:12 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:12.383209 | orchestrator | 2025-04-05 12:36:12 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:12.383920 | orchestrator | 2025-04-05 12:36:12 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:15.420565 | orchestrator | 2025-04-05 12:36:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:15.420716 | orchestrator | 2025-04-05 12:36:15 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:15.422509 | orchestrator | 2025-04-05 12:36:15 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:15.426086 | orchestrator | 2025-04-05 12:36:15 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 
12:36:15.426958 | orchestrator | 2025-04-05 12:36:15 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:15.428673 | orchestrator | 2025-04-05 12:36:15 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:18.455672 | orchestrator | 2025-04-05 12:36:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:18.455852 | orchestrator | 2025-04-05 12:36:18 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:18.457148 | orchestrator | 2025-04-05 12:36:18 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:18.457178 | orchestrator | 2025-04-05 12:36:18 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:18.458555 | orchestrator | 2025-04-05 12:36:18 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:18.459171 | orchestrator | 2025-04-05 12:36:18 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:21.496448 | orchestrator | 2025-04-05 12:36:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:21.496599 | orchestrator | 2025-04-05 12:36:21 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:21.500060 | orchestrator | 2025-04-05 12:36:21 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:21.501105 | orchestrator | 2025-04-05 12:36:21 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:21.503297 | orchestrator | 2025-04-05 12:36:21 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:21.504172 | orchestrator | 2025-04-05 12:36:21 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:24.536150 | orchestrator | 2025-04-05 12:36:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:24.536286 | orchestrator | 2025-04-05 12:36:24 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:24.538063 | orchestrator | 2025-04-05 12:36:24 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:24.539542 | orchestrator | 2025-04-05 12:36:24 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:24.540491 | orchestrator | 2025-04-05 12:36:24 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:24.540994 | orchestrator | 2025-04-05 12:36:24 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:24.541190 | orchestrator | 2025-04-05 12:36:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:27.570976 | orchestrator | 2025-04-05 12:36:27 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:27.571998 | orchestrator | 2025-04-05 12:36:27 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:27.575919 | orchestrator | 2025-04-05 12:36:27 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:27.576620 | orchestrator | 2025-04-05 12:36:27 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:27.577391 | orchestrator | 2025-04-05 12:36:27 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:30.613443 | orchestrator | 2025-04-05 12:36:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 
12:36:30.613575 | orchestrator | 2025-04-05 12:36:30 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:30.614263 | orchestrator | 2025-04-05 12:36:30 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:30.614299 | orchestrator | 2025-04-05 12:36:30 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:30.615175 | orchestrator | 2025-04-05 12:36:30 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:30.615918 | orchestrator | 2025-04-05 12:36:30 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:33.641462 | orchestrator | 2025-04-05 12:36:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:33.641603 | orchestrator | 2025-04-05 12:36:33 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:33.642477 | orchestrator | 2025-04-05 12:36:33 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:33.645676 | orchestrator | 2025-04-05 12:36:33 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:33.645724 | orchestrator | 2025-04-05 12:36:33 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:36.673973 | orchestrator | 2025-04-05 12:36:33 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:36.674133 | orchestrator | 2025-04-05 12:36:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:36.674168 | orchestrator | 2025-04-05 12:36:36 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:36.674822 | orchestrator | 2025-04-05 12:36:36 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:36.675387 | orchestrator | 2025-04-05 12:36:36 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:36.676613 | orchestrator | 2025-04-05 12:36:36 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:36.677068 | orchestrator | 2025-04-05 12:36:36 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:39.712749 | orchestrator | 2025-04-05 12:36:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:39.712862 | orchestrator | 2025-04-05 12:36:39 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:39.713868 | orchestrator | 2025-04-05 12:36:39 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state STARTED 2025-04-05 12:36:39.713969 | orchestrator | 2025-04-05 12:36:39 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:39.713987 | orchestrator | 2025-04-05 12:36:39 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:39.714002 | orchestrator | 2025-04-05 12:36:39 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:39.714077 | orchestrator | 2025-04-05 12:36:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:42.765166 | orchestrator | 2025-04-05 12:36:42 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:42.765535 | orchestrator | 2025-04-05 12:36:42 | INFO  | Task e6eb77c3-5af9-4753-8927-0952d272f286 is in state SUCCESS 2025-04-05 12:36:42.765934 | orchestrator | 2025-04-05 12:36:42.765966 | orchestrator | 2025-04-05 12:36:42.765980 | 
orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-04-05 12:36:42.765996 | orchestrator | 2025-04-05 12:36:42.766010 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-04-05 12:36:42.766070 | orchestrator | Saturday 05 April 2025 12:35:29 +0000 (0:00:00.146) 0:00:00.146 ******** 2025-04-05 12:36:42.766084 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-04-05 12:36:42.766100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766114 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:36:42.766166 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766180 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-04-05 12:36:42.766208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-04-05 12:36:42.766223 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-04-05 12:36:42.766237 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-04-05 12:36:42.766251 | orchestrator | 2025-04-05 12:36:42.766265 | orchestrator | TASK [Create share directory] ************************************************** 2025-04-05 12:36:42.766284 | orchestrator | Saturday 05 April 2025 12:35:32 +0000 (0:00:03.853) 0:00:04.000 ******** 2025-04-05 12:36:42.766299 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-05 12:36:42.766314 | orchestrator | 2025-04-05 12:36:42.766328 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-04-05 12:36:42.766341 | orchestrator | Saturday 05 April 2025 12:35:33 +0000 (0:00:00.889) 0:00:04.889 ******** 2025-04-05 12:36:42.766355 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-05 12:36:42.766369 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766383 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766397 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:36:42.766410 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766424 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-05 12:36:42.766437 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-05 12:36:42.766451 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-05 12:36:42.766464 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-05 12:36:42.766478 | orchestrator | 2025-04-05 12:36:42.766492 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-04-05 12:36:42.766506 | 
orchestrator | Saturday 05 April 2025 12:35:45 +0000 (0:00:11.335) 0:00:16.225 ******** 2025-04-05 12:36:42.766520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-04-05 12:36:42.766534 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766564 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:36:42.766580 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-04-05 12:36:42.766595 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-04-05 12:36:42.766611 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-04-05 12:36:42.766626 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-04-05 12:36:42.766641 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-04-05 12:36:42.766656 | orchestrator | 2025-04-05 12:36:42.766672 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:36:42.766687 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:36:42.766704 | orchestrator | 2025-04-05 12:36:42.766720 | orchestrator | 2025-04-05 12:36:42.766762 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:36:42.766778 | orchestrator | Saturday 05 April 2025 12:35:51 +0000 (0:00:06.083) 0:00:22.308 ******** 2025-04-05 12:36:42.766801 | orchestrator | =============================================================================== 2025-04-05 12:36:42.766817 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.34s 2025-04-05 12:36:42.766832 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.08s 2025-04-05 12:36:42.766847 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.85s 2025-04-05 12:36:42.766864 | orchestrator | Create share directory -------------------------------------------------- 0.89s 2025-04-05 12:36:42.766879 | orchestrator | 2025-04-05 12:36:42.766895 | orchestrator | 2025-04-05 12:36:42.766909 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-04-05 12:36:42.766923 | orchestrator | 2025-04-05 12:36:42.766946 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-04-05 12:36:42.766961 | orchestrator | Saturday 05 April 2025 12:35:55 +0000 (0:00:00.231) 0:00:00.231 ******** 2025-04-05 12:36:42.766975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-04-05 12:36:42.766991 | orchestrator | 2025-04-05 12:36:42.767004 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-04-05 12:36:42.767018 | orchestrator | Saturday 05 April 2025 12:35:55 +0000 (0:00:00.216) 0:00:00.447 ******** 2025-04-05 12:36:42.767031 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-04-05 12:36:42.767045 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-04-05 12:36:42.767059 | orchestrator | ok: 
[testbed-manager] => (item=/opt/cephclient) 2025-04-05 12:36:42.767073 | orchestrator | 2025-04-05 12:36:42.767087 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-04-05 12:36:42.767100 | orchestrator | Saturday 05 April 2025 12:35:56 +0000 (0:00:01.168) 0:00:01.616 ******** 2025-04-05 12:36:42.767114 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-04-05 12:36:42.767128 | orchestrator | 2025-04-05 12:36:42.767142 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-04-05 12:36:42.767155 | orchestrator | Saturday 05 April 2025 12:35:58 +0000 (0:00:01.125) 0:00:02.742 ******** 2025-04-05 12:36:42.767169 | orchestrator | changed: [testbed-manager] 2025-04-05 12:36:42.767190 | orchestrator | 2025-04-05 12:36:42.767204 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-04-05 12:36:42.767217 | orchestrator | Saturday 05 April 2025 12:35:58 +0000 (0:00:00.906) 0:00:03.649 ******** 2025-04-05 12:36:42.767231 | orchestrator | changed: [testbed-manager] 2025-04-05 12:36:42.767250 | orchestrator | 2025-04-05 12:36:42.767264 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-04-05 12:36:42.767278 | orchestrator | Saturday 05 April 2025 12:35:59 +0000 (0:00:00.796) 0:00:04.446 ******** 2025-04-05 12:36:42.767292 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-04-05 12:36:42.767305 | orchestrator | ok: [testbed-manager] 2025-04-05 12:36:42.767320 | orchestrator | 2025-04-05 12:36:42.767333 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-04-05 12:36:42.767347 | orchestrator | Saturday 05 April 2025 12:36:35 +0000 (0:00:35.344) 0:00:39.790 ******** 2025-04-05 12:36:42.767361 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-04-05 12:36:42.767375 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-04-05 12:36:42.767389 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-04-05 12:36:42.767402 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-04-05 12:36:42.767416 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-04-05 12:36:42.767430 | orchestrator | 2025-04-05 12:36:42.767443 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-04-05 12:36:42.767457 | orchestrator | Saturday 05 April 2025 12:36:37 +0000 (0:00:02.846) 0:00:42.637 ******** 2025-04-05 12:36:42.767478 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-04-05 12:36:42.767491 | orchestrator | 2025-04-05 12:36:42.767505 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-04-05 12:36:42.767519 | orchestrator | Saturday 05 April 2025 12:36:38 +0000 (0:00:00.362) 0:00:42.999 ******** 2025-04-05 12:36:42.767533 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:36:42.767547 | orchestrator | 2025-04-05 12:36:42.767561 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-04-05 12:36:42.767575 | orchestrator | Saturday 05 April 2025 12:36:38 +0000 (0:00:00.116) 0:00:43.115 ******** 2025-04-05 12:36:42.767589 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:36:42.767603 | orchestrator | 
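The cephclient tasks above set up a small containerised Ceph client on the manager node: the role creates /opt/cephclient/configuration and /opt/cephclient/data, renders ceph.conf and a keyring into the configuration directory, drops a docker-compose.yml, starts the service (retrying until it comes up), and installs wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) that run the corresponding tool inside the container. As a minimal sketch of what such a compose file can look like (the real file is templated by osism.services.cephclient; the image reference, service name and container-side paths here are assumptions, only the host-side directories and the ceph.conf destination come from the log above):

---
# Illustrative sketch only; the actual docker-compose.yml is templated by
# the osism.services.cephclient role. Image tag, service name and the
# container-side mount points are assumptions.
services:
  cephclient:
    image: registry.osism.tech/osism/cephclient:latest  # assumed image reference
    restart: unless-stopped
    volumes:
      - /opt/cephclient/configuration/ceph.conf:/etc/ceph/ceph.conf:ro
      - /opt/cephclient/configuration/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro  # assumed keyring path
      - /opt/cephclient/data:/data

With a layout like this, each wrapper script only needs to exec the matching binary inside the container, which is why the handlers that follow restart the service and wait for it to report healthy before the bash completion scripts are copied.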
2025-04-05 12:36:42.767617 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-04-05 12:36:42.767631 | orchestrator | Saturday 05 April 2025 12:36:38 +0000 (0:00:00.284) 0:00:43.400 ******** 2025-04-05 12:36:42.767644 | orchestrator | changed: [testbed-manager] 2025-04-05 12:36:42.767659 | orchestrator | 2025-04-05 12:36:42.767673 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-04-05 12:36:42.767686 | orchestrator | Saturday 05 April 2025 12:36:40 +0000 (0:00:01.456) 0:00:44.856 ******** 2025-04-05 12:36:42.767699 | orchestrator | changed: [testbed-manager] 2025-04-05 12:36:42.767713 | orchestrator | 2025-04-05 12:36:42.767743 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-04-05 12:36:42.767758 | orchestrator | Saturday 05 April 2025 12:36:40 +0000 (0:00:00.687) 0:00:45.543 ******** 2025-04-05 12:36:42.767772 | orchestrator | changed: [testbed-manager] 2025-04-05 12:36:42.767785 | orchestrator | 2025-04-05 12:36:42.767799 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-04-05 12:36:42.767813 | orchestrator | Saturday 05 April 2025 12:36:41 +0000 (0:00:00.577) 0:00:46.121 ******** 2025-04-05 12:36:42.767826 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-04-05 12:36:42.767840 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-04-05 12:36:42.767854 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-04-05 12:36:42.767868 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-04-05 12:36:42.767882 | orchestrator | 2025-04-05 12:36:42.767896 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:36:42.767909 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:36:42.767923 | orchestrator | 2025-04-05 12:36:42.767937 | orchestrator | 2025-04-05 12:36:42.767951 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:36:42.767964 | orchestrator | Saturday 05 April 2025 12:36:42 +0000 (0:00:01.081) 0:00:47.203 ******** 2025-04-05 12:36:42.767985 | orchestrator | =============================================================================== 2025-04-05 12:36:42.769004 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.34s 2025-04-05 12:36:42.769030 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 2.85s 2025-04-05 12:36:42.769044 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.46s 2025-04-05 12:36:42.769058 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s 2025-04-05 12:36:42.769072 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s 2025-04-05 12:36:42.769085 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.08s 2025-04-05 12:36:42.769107 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s 2025-04-05 12:36:42.769121 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.80s 2025-04-05 12:36:42.769135 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2025-04-05 12:36:42.769149 | 
orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2025-04-05 12:36:42.769174 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.36s 2025-04-05 12:36:42.769188 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2025-04-05 12:36:42.769202 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-04-05 12:36:42.769216 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-04-05 12:36:42.769235 | orchestrator | 2025-04-05 12:36:42 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:42.769466 | orchestrator | 2025-04-05 12:36:42 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:42.769994 | orchestrator | 2025-04-05 12:36:42 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:45.809569 | orchestrator | 2025-04-05 12:36:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:45.809705 | orchestrator | 2025-04-05 12:36:45 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:45.810148 | orchestrator | 2025-04-05 12:36:45 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:45.810181 | orchestrator | 2025-04-05 12:36:45 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:45.810799 | orchestrator | 2025-04-05 12:36:45 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:45.811342 | orchestrator | 2025-04-05 12:36:45 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:36:48.844787 | orchestrator | 2025-04-05 12:36:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:48.844920 | orchestrator | 2025-04-05 12:36:48 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:48.846067 | orchestrator | 2025-04-05 12:36:48 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:48.846101 | orchestrator | 2025-04-05 12:36:48 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:48.847508 | orchestrator | 2025-04-05 12:36:48 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:48.848182 | orchestrator | 2025-04-05 12:36:48 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:36:51.883966 | orchestrator | 2025-04-05 12:36:48 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:51.884095 | orchestrator | 2025-04-05 12:36:51 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:51.884631 | orchestrator | 2025-04-05 12:36:51 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:51.884650 | orchestrator | 2025-04-05 12:36:51 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:51.884661 | orchestrator | 2025-04-05 12:36:51 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:51.884677 | orchestrator | 2025-04-05 12:36:51 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:36:54.916857 | orchestrator | 2025-04-05 12:36:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:54.916983 | orchestrator | 2025-04-05 12:36:54 | 
INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:54.917152 | orchestrator | 2025-04-05 12:36:54 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:54.917182 | orchestrator | 2025-04-05 12:36:54 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:54.918660 | orchestrator | 2025-04-05 12:36:54 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:54.919123 | orchestrator | 2025-04-05 12:36:54 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:36:57.945384 | orchestrator | 2025-04-05 12:36:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:36:57.945521 | orchestrator | 2025-04-05 12:36:57 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:36:57.946010 | orchestrator | 2025-04-05 12:36:57 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:36:57.946295 | orchestrator | 2025-04-05 12:36:57 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:36:57.947861 | orchestrator | 2025-04-05 12:36:57 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:36:57.948609 | orchestrator | 2025-04-05 12:36:57 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:36:57.949558 | orchestrator | 2025-04-05 12:36:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:00.995072 | orchestrator | 2025-04-05 12:37:00 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:00.995317 | orchestrator | 2025-04-05 12:37:00 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:00.996146 | orchestrator | 2025-04-05 12:37:00 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:00.996881 | orchestrator | 2025-04-05 12:37:00 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:00.997688 | orchestrator | 2025-04-05 12:37:00 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:37:04.028827 | orchestrator | 2025-04-05 12:37:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:04.029051 | orchestrator | 2025-04-05 12:37:04 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:04.029538 | orchestrator | 2025-04-05 12:37:04 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:04.029568 | orchestrator | 2025-04-05 12:37:04 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:04.032305 | orchestrator | 2025-04-05 12:37:04 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:07.057884 | orchestrator | 2025-04-05 12:37:04 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:37:07.057980 | orchestrator | 2025-04-05 12:37:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:07.058047 | orchestrator | 2025-04-05 12:37:07 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:07.059237 | orchestrator | 2025-04-05 12:37:07 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:07.060901 | orchestrator | 2025-04-05 12:37:07 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:07.062332 | orchestrator | 
2025-04-05 12:37:07 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:07.063529 | orchestrator | 2025-04-05 12:37:07 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:37:10.097183 | orchestrator | 2025-04-05 12:37:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:10.097310 | orchestrator | 2025-04-05 12:37:10 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:10.098624 | orchestrator | 2025-04-05 12:37:10 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:10.098679 | orchestrator | 2025-04-05 12:37:10 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:10.100148 | orchestrator | 2025-04-05 12:37:10 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:10.101580 | orchestrator | 2025-04-05 12:37:10 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:37:13.137466 | orchestrator | 2025-04-05 12:37:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:13.137599 | orchestrator | 2025-04-05 12:37:13 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:13.137962 | orchestrator | 2025-04-05 12:37:13 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:13.138390 | orchestrator | 2025-04-05 12:37:13 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:13.140204 | orchestrator | 2025-04-05 12:37:13 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:13.140849 | orchestrator | 2025-04-05 12:37:13 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state STARTED 2025-04-05 12:37:13.141754 | orchestrator | 2025-04-05 12:37:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:16.175118 | orchestrator | 2025-04-05 12:37:16 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:16.176299 | orchestrator | 2025-04-05 12:37:16 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:16.177176 | orchestrator | 2025-04-05 12:37:16 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:16.177936 | orchestrator | 2025-04-05 12:37:16 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:16.179293 | orchestrator | 2025-04-05 12:37:16 | INFO  | Task 44c3d2ce-3f17-4cb8-905c-1fe72912dee9 is in state SUCCESS 2025-04-05 12:37:16.180186 | orchestrator | 2025-04-05 12:37:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:19.225709 | orchestrator | 2025-04-05 12:37:19 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:19.225936 | orchestrator | 2025-04-05 12:37:19 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:19.226674 | orchestrator | 2025-04-05 12:37:19 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:19.227504 | orchestrator | 2025-04-05 12:37:19 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:22.271976 | orchestrator | 2025-04-05 12:37:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:22.272099 | orchestrator | 2025-04-05 12:37:22 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:22.272377 | orchestrator | 
2025-04-05 12:37:22 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state STARTED 2025-04-05 12:37:22.273030 | orchestrator | 2025-04-05 12:37:22 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:22.273830 | orchestrator | 2025-04-05 12:37:22 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:25.311851 | orchestrator | 2025-04-05 12:37:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:25.311984 | orchestrator | 2025-04-05 12:37:25 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:25.313177 | orchestrator | 2025-04-05 12:37:25 | INFO  | Task 6d733e83-3ce4-48cd-a0c7-c6af5aeed1d3 is in state SUCCESS 2025-04-05 12:37:25.316075 | orchestrator | 2025-04-05 12:37:25.316395 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-05 12:37:25.316416 | orchestrator | 2025-04-05 12:37:25.316431 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-04-05 12:37:25.316445 | orchestrator | 2025-04-05 12:37:25.316460 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-04-05 12:37:25.316474 | orchestrator | Saturday 05 April 2025 12:36:46 +0000 (0:00:00.371) 0:00:00.371 ******** 2025-04-05 12:37:25.316489 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316505 | orchestrator | 2025-04-05 12:37:25.316520 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-04-05 12:37:25.316533 | orchestrator | Saturday 05 April 2025 12:36:47 +0000 (0:00:01.158) 0:00:01.529 ******** 2025-04-05 12:37:25.316548 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316562 | orchestrator | 2025-04-05 12:37:25.316576 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-04-05 12:37:25.316590 | orchestrator | Saturday 05 April 2025 12:36:48 +0000 (0:00:00.931) 0:00:02.460 ******** 2025-04-05 12:37:25.316604 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316618 | orchestrator | 2025-04-05 12:37:25.316632 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-04-05 12:37:25.316646 | orchestrator | Saturday 05 April 2025 12:36:49 +0000 (0:00:00.865) 0:00:03.326 ******** 2025-04-05 12:37:25.316660 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316675 | orchestrator | 2025-04-05 12:37:25.316689 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-04-05 12:37:25.316703 | orchestrator | Saturday 05 April 2025 12:36:49 +0000 (0:00:00.866) 0:00:04.192 ******** 2025-04-05 12:37:25.316717 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316756 | orchestrator | 2025-04-05 12:37:25.316771 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-04-05 12:37:25.316785 | orchestrator | Saturday 05 April 2025 12:36:50 +0000 (0:00:00.862) 0:00:05.054 ******** 2025-04-05 12:37:25.316799 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316814 | orchestrator | 2025-04-05 12:37:25.316828 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-04-05 12:37:25.316842 | orchestrator | Saturday 05 April 2025 12:36:51 +0000 (0:00:00.780) 0:00:05.835 ******** 2025-04-05 12:37:25.316856 | orchestrator | 
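The ceph dashboard play above works through the plain ceph CLI on testbed-manager: disable the dashboard module, set the mgr options listed in the task names, re-enable the module, then (in the tasks that follow) create an admin user from a temporary password file. A condensed sketch of equivalent tasks, assuming the ceph CLI is available on the manager node; the admin user name and password file path are illustrative, not taken from the playbook:

- name: Bootstrap the Ceph dashboard (condensed sketch)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Disable the dashboard while it is being reconfigured
      ansible.builtin.command: ceph mgr module disable dashboard

    - name: Apply the mgr/dashboard options set by the play above
      ansible.builtin.command: "ceph config set mgr {{ item.key }} {{ item.value }}"
      loop:
        - { key: mgr/dashboard/ssl, value: "false" }
        - { key: mgr/dashboard/server_port, value: "7000" }
        - { key: mgr/dashboard/server_addr, value: "0.0.0.0" }
        - { key: mgr/dashboard/standby_behaviour, value: "error" }
        - { key: mgr/dashboard/standby_error_status_code, value: "404" }

    - name: Re-enable the dashboard
      ansible.builtin.command: ceph mgr module enable dashboard

    - name: Create the admin user from a temporary password file (illustrative path)
      ansible.builtin.command: ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator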
changed: [testbed-manager] 2025-04-05 12:37:25.316870 | orchestrator | 2025-04-05 12:37:25.316884 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-04-05 12:37:25.316898 | orchestrator | Saturday 05 April 2025 12:36:52 +0000 (0:00:01.061) 0:00:06.897 ******** 2025-04-05 12:37:25.316912 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316926 | orchestrator | 2025-04-05 12:37:25.316940 | orchestrator | TASK [Create admin user] ******************************************************* 2025-04-05 12:37:25.316954 | orchestrator | Saturday 05 April 2025 12:36:53 +0000 (0:00:01.041) 0:00:07.938 ******** 2025-04-05 12:37:25.316968 | orchestrator | changed: [testbed-manager] 2025-04-05 12:37:25.316983 | orchestrator | 2025-04-05 12:37:25.316999 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-04-05 12:37:25.317015 | orchestrator | Saturday 05 April 2025 12:37:10 +0000 (0:00:16.445) 0:00:24.383 ******** 2025-04-05 12:37:25.317030 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:37:25.317045 | orchestrator | 2025-04-05 12:37:25.317061 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-05 12:37:25.317076 | orchestrator | 2025-04-05 12:37:25.317092 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-05 12:37:25.317122 | orchestrator | Saturday 05 April 2025 12:37:10 +0000 (0:00:00.546) 0:00:24.930 ******** 2025-04-05 12:37:25.317138 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.317154 | orchestrator | 2025-04-05 12:37:25.317170 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-05 12:37:25.317200 | orchestrator | 2025-04-05 12:37:25.317216 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-05 12:37:25.317231 | orchestrator | Saturday 05 April 2025 12:37:12 +0000 (0:00:01.961) 0:00:26.891 ******** 2025-04-05 12:37:25.317246 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:37:25.317262 | orchestrator | 2025-04-05 12:37:25.317278 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-05 12:37:25.317293 | orchestrator | 2025-04-05 12:37:25.317308 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-05 12:37:25.317324 | orchestrator | Saturday 05 April 2025 12:37:14 +0000 (0:00:01.648) 0:00:28.540 ******** 2025-04-05 12:37:25.317340 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:37:25.317354 | orchestrator | 2025-04-05 12:37:25.317368 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:37:25.317382 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-05 12:37:25.317398 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:37:25.317412 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:37:25.317426 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:37:25.317440 | orchestrator | 2025-04-05 12:37:25.317454 | orchestrator | 2025-04-05 12:37:25.317468 | orchestrator | 2025-04-05 12:37:25.317481 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-04-05 12:37:25.317495 | orchestrator | Saturday 05 April 2025 12:37:15 +0000 (0:00:01.265) 0:00:29.805 ******** 2025-04-05 12:37:25.317509 | orchestrator | =============================================================================== 2025-04-05 12:37:25.317523 | orchestrator | Create admin user ------------------------------------------------------ 16.45s 2025-04-05 12:37:25.317580 | orchestrator | Restart ceph manager service -------------------------------------------- 4.87s 2025-04-05 12:37:25.317597 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.16s 2025-04-05 12:37:25.317611 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.06s 2025-04-05 12:37:25.317625 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s 2025-04-05 12:37:25.317639 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-04-05 12:37:25.317653 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.87s 2025-04-05 12:37:25.317667 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.87s 2025-04-05 12:37:25.317681 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.86s 2025-04-05 12:37:25.317695 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.78s 2025-04-05 12:37:25.317709 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.55s 2025-04-05 12:37:25.317741 | orchestrator | 2025-04-05 12:37:25.317756 | orchestrator | 2025-04-05 12:37:25.317770 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:37:25.317784 | orchestrator | 2025-04-05 12:37:25.317798 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:37:25.317812 | orchestrator | Saturday 05 April 2025 12:35:35 +0000 (0:00:00.167) 0:00:00.167 ******** 2025-04-05 12:37:25.317826 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:37:25.317841 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:37:25.317855 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:37:25.317869 | orchestrator | 2025-04-05 12:37:25.317883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:37:25.317897 | orchestrator | Saturday 05 April 2025 12:35:35 +0000 (0:00:00.251) 0:00:00.419 ******** 2025-04-05 12:37:25.317919 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-04-05 12:37:25.317933 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-04-05 12:37:25.317947 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-04-05 12:37:25.317961 | orchestrator | 2025-04-05 12:37:25.317975 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-04-05 12:37:25.317989 | orchestrator | 2025-04-05 12:37:25.318003 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-05 12:37:25.318067 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.326) 0:00:00.746 ******** 2025-04-05 12:37:25.318086 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
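The two "Group hosts based on ..." tasks are Kolla's usual pattern for building dynamic host groups from the requested action and the enable_* flags, so that later plays such as "Apply role barbican" only run where the service is actually enabled. The enable_barbican_True item in the output corresponds roughly to group_by calls like the following sketch (variable names assumed from kolla-ansible conventions):

- name: Group hosts based on Kolla action (sketch)
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services (sketch)
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_barbican_{{ enable_barbican | bool }}"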
2025-04-05 12:37:25.318101 | orchestrator | 2025-04-05 12:37:25.318115 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-04-05 12:37:25.318129 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.719) 0:00:01.466 ******** 2025-04-05 12:37:25.318143 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-04-05 12:37:25.318157 | orchestrator | 2025-04-05 12:37:25.318177 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-04-05 12:37:25.318191 | orchestrator | Saturday 05 April 2025 12:35:40 +0000 (0:00:03.358) 0:00:04.824 ******** 2025-04-05 12:37:25.318205 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-04-05 12:37:25.318219 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-04-05 12:37:25.318233 | orchestrator | 2025-04-05 12:37:25.318247 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-04-05 12:37:25.318261 | orchestrator | Saturday 05 April 2025 12:35:46 +0000 (0:00:05.940) 0:00:10.764 ******** 2025-04-05 12:37:25.318275 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-04-05 12:37:25.318289 | orchestrator | 2025-04-05 12:37:25.318303 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-04-05 12:37:25.318317 | orchestrator | Saturday 05 April 2025 12:35:49 +0000 (0:00:03.161) 0:00:13.925 ******** 2025-04-05 12:37:25.318330 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:37:25.318344 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-04-05 12:37:25.318358 | orchestrator | 2025-04-05 12:37:25.318372 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-04-05 12:37:25.318386 | orchestrator | Saturday 05 April 2025 12:35:53 +0000 (0:00:03.782) 0:00:17.708 ******** 2025-04-05 12:37:25.318399 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:37:25.318413 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-04-05 12:37:25.318427 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-04-05 12:37:25.318441 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-04-05 12:37:25.318455 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-04-05 12:37:25.318469 | orchestrator | 2025-04-05 12:37:25.318483 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-04-05 12:37:25.318497 | orchestrator | Saturday 05 April 2025 12:36:08 +0000 (0:00:14.928) 0:00:32.637 ******** 2025-04-05 12:37:25.318510 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-04-05 12:37:25.318524 | orchestrator | 2025-04-05 12:37:25.318538 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-04-05 12:37:25.318551 | orchestrator | Saturday 05 April 2025 12:36:12 +0000 (0:00:04.394) 0:00:37.031 ******** 2025-04-05 12:37:25.318577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.318604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.318620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.318636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.318800 | orchestrator | 2025-04-05 12:37:25.318815 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-04-05 12:37:25.318829 | orchestrator | Saturday 05 April 2025 12:36:14 +0000 (0:00:02.322) 0:00:39.353 ******** 2025-04-05 12:37:25.318843 | 
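Each item echoed by "Ensuring config directories exist" (and repeated by the copy tasks further down) is one entry of the role's service map: container name, image, bind mounts, a container healthcheck and, for the API service, the HAProxy frontend definitions. Reduced to a single service, the structure behind those log lines looks roughly like this abridged illustration (values taken from the output above, not the full kolla-ansible default):

barbican_services:
  barbican-api:
    container_name: barbican_api
    group: barbican-api
    enabled: true
    image: registry.osism.tech/kolla/barbican-api:2024.1
    volumes:
      - /etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - barbican:/var/lib/barbican/
      - kolla_logs:/var/log/kolla/
    healthcheck:
      interval: 30
      retries: 3
      start_period: 5
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]
      timeout: 30
    haproxy:
      barbican_api:
        enabled: "yes"
        mode: http
        external: false
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"
      barbican_api_external:
        enabled: "yes"
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "9311"
        listen_port: "9311"
        tls_backend: "no"

With backend TLS disabled (tls_backend: "no"), the "Copying over backend internal TLS certificate" and "... TLS key" tasks that follow are skipped on all three nodes, which is exactly what the log shows.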
orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-04-05 12:37:25.318857 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-04-05 12:37:25.318870 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-04-05 12:37:25.318884 | orchestrator | 2025-04-05 12:37:25.318898 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-04-05 12:37:25.318911 | orchestrator | Saturday 05 April 2025 12:36:17 +0000 (0:00:02.323) 0:00:41.677 ******** 2025-04-05 12:37:25.318925 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.318939 | orchestrator | 2025-04-05 12:37:25.318953 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-04-05 12:37:25.318967 | orchestrator | Saturday 05 April 2025 12:36:17 +0000 (0:00:00.125) 0:00:41.803 ******** 2025-04-05 12:37:25.318981 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.318995 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.319008 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.319022 | orchestrator | 2025-04-05 12:37:25.319036 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-05 12:37:25.319050 | orchestrator | Saturday 05 April 2025 12:36:17 +0000 (0:00:00.420) 0:00:42.224 ******** 2025-04-05 12:37:25.319070 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:37:25.319085 | orchestrator | 2025-04-05 12:37:25.319098 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-04-05 12:37:25.319112 | orchestrator | Saturday 05 April 2025 12:36:18 +0000 (0:00:00.602) 0:00:42.826 ******** 2025-04-05 12:37:25.319135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.319295 | orchestrator | 2025-04-05 12:37:25.319310 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-04-05 12:37:25.319324 | orchestrator | Saturday 05 April 2025 12:36:22 +0000 (0:00:03.943) 0:00:46.770 ******** 2025-04-05 12:37:25.319349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319417 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.319442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319492 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.319507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319559 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.319573 | orchestrator | 2025-04-05 12:37:25.319587 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-04-05 12:37:25.319601 | orchestrator | Saturday 05 April 2025 12:36:23 +0000 (0:00:01.172) 0:00:47.942 ******** 2025-04-05 12:37:25.319626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319681 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.319703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319790 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.319805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.319827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.319862 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.319877 | orchestrator | 2025-04-05 12:37:25.319891 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-04-05 12:37:25.319905 | orchestrator | Saturday 05 April 2025 12:36:25 +0000 (0:00:01.630) 0:00:49.572 ******** 2025-04-05 12:37:25.319919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.319981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320093 | orchestrator | 2025-04-05 12:37:25.320107 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-04-05 12:37:25.320121 | orchestrator | Saturday 05 April 2025 12:36:29 +0000 (0:00:04.068) 0:00:53.641 ******** 2025-04-05 12:37:25.320135 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.320149 | orchestrator | changed: 
[testbed-node-2] 2025-04-05 12:37:25.320163 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:37:25.320176 | orchestrator | 2025-04-05 12:37:25.320190 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-04-05 12:37:25.320204 | orchestrator | Saturday 05 April 2025 12:36:31 +0000 (0:00:02.308) 0:00:55.950 ******** 2025-04-05 12:37:25.320217 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:37:25.320231 | orchestrator | 2025-04-05 12:37:25.320249 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-04-05 12:37:25.320264 | orchestrator | Saturday 05 April 2025 12:36:32 +0000 (0:00:01.351) 0:00:57.302 ******** 2025-04-05 12:37:25.320277 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.320291 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.320304 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.320318 | orchestrator | 2025-04-05 12:37:25.320332 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-04-05 12:37:25.320346 | orchestrator | Saturday 05 April 2025 12:36:33 +0000 (0:00:00.873) 0:00:58.176 ******** 2025-04-05 12:37:25.320367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320518 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320548 | orchestrator | 2025-04-05 12:37:25.320562 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-04-05 12:37:25.320577 | orchestrator | Saturday 05 April 2025 12:36:45 +0000 (0:00:11.763) 0:01:09.939 ******** 2025-04-05 12:37:25.320591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.320611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320647 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.320673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.320689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320717 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.320749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-05 12:37:25.320783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:37:25.320824 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.320837 | orchestrator | 2025-04-05 12:37:25.320852 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-04-05 12:37:25.320866 | orchestrator | Saturday 05 April 2025 12:36:46 +0000 (0:00:01.520) 0:01:11.460 ******** 2025-04-05 12:37:25.320880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-05 12:37:25.320942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.320994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.321008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.321028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:37:25.321043 | orchestrator | 2025-04-05 12:37:25.321057 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-05 12:37:25.321078 | orchestrator | Saturday 05 April 2025 12:36:50 +0000 (0:00:03.308) 0:01:14.768 ******** 2025-04-05 12:37:25.321092 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:37:25.321112 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:37:25.321126 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:37:25.321140 | orchestrator | 2025-04-05 12:37:25.321154 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-04-05 12:37:25.321168 | orchestrator | Saturday 05 April 2025 12:36:50 +0000 (0:00:00.273) 0:01:15.042 ******** 2025-04-05 12:37:25.321181 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.321195 | orchestrator | 2025-04-05 12:37:25.321209 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-04-05 12:37:25.321223 | orchestrator | Saturday 05 April 2025 12:36:53 +0000 (0:00:02.495) 0:01:17.538 ******** 2025-04-05 12:37:25.321236 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.321250 | orchestrator | 2025-04-05 12:37:25.321264 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-04-05 12:37:25.321278 | orchestrator | Saturday 05 April 2025 12:36:55 +0000 (0:00:02.105) 0:01:19.644 ******** 2025-04-05 12:37:25.321291 | orchestrator | changed: 
[testbed-node-0] 2025-04-05 12:37:25.321305 | orchestrator | 2025-04-05 12:37:25.321319 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-05 12:37:25.321332 | orchestrator | Saturday 05 April 2025 12:37:03 +0000 (0:00:08.129) 0:01:27.773 ******** 2025-04-05 12:37:25.321346 | orchestrator | 2025-04-05 12:37:25.321360 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-05 12:37:25.321374 | orchestrator | Saturday 05 April 2025 12:37:03 +0000 (0:00:00.057) 0:01:27.831 ******** 2025-04-05 12:37:25.321387 | orchestrator | 2025-04-05 12:37:25.321401 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-05 12:37:25.321415 | orchestrator | Saturday 05 April 2025 12:37:03 +0000 (0:00:00.049) 0:01:27.880 ******** 2025-04-05 12:37:25.321428 | orchestrator | 2025-04-05 12:37:25.321442 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-04-05 12:37:25.321455 | orchestrator | Saturday 05 April 2025 12:37:03 +0000 (0:00:00.129) 0:01:28.010 ******** 2025-04-05 12:37:25.321469 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:37:25.321482 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:37:25.321496 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.321510 | orchestrator | 2025-04-05 12:37:25.321523 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-04-05 12:37:25.321537 | orchestrator | Saturday 05 April 2025 12:37:12 +0000 (0:00:09.419) 0:01:37.429 ******** 2025-04-05 12:37:25.321551 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.321564 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:37:25.321578 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:37:25.321592 | orchestrator | 2025-04-05 12:37:25.321605 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-04-05 12:37:25.321619 | orchestrator | Saturday 05 April 2025 12:37:17 +0000 (0:00:05.041) 0:01:42.470 ******** 2025-04-05 12:37:25.321632 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:37:25.321646 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:37:25.321660 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:37:25.321673 | orchestrator | 2025-04-05 12:37:25.321687 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:37:25.321701 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:37:25.321715 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:37:25.321780 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:37:25.321803 | orchestrator | 2025-04-05 12:37:25.321817 | orchestrator | 2025-04-05 12:37:25.321831 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:37:25.321845 | orchestrator | Saturday 05 April 2025 12:37:23 +0000 (0:00:05.324) 0:01:47.795 ******** 2025-04-05 12:37:25.321865 | orchestrator | =============================================================================== 2025-04-05 12:37:25.321879 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.93s 2025-04-05 12:37:25.321893 | orchestrator 
| barbican : Copying over barbican.conf ---------------------------------- 11.76s 2025-04-05 12:37:25.321907 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.42s 2025-04-05 12:37:25.321921 | orchestrator | barbican : Running barbican bootstrap container ------------------------- 8.13s 2025-04-05 12:37:25.321934 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.94s 2025-04-05 12:37:25.321948 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.32s 2025-04-05 12:37:25.321962 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.04s 2025-04-05 12:37:25.321975 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.39s 2025-04-05 12:37:25.321989 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.07s 2025-04-05 12:37:25.322002 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.94s 2025-04-05 12:37:25.322057 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.78s 2025-04-05 12:37:25.322075 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.36s 2025-04-05 12:37:25.322089 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.31s 2025-04-05 12:37:25.322110 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.16s 2025-04-05 12:37:25.322951 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.50s 2025-04-05 12:37:25.322971 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.32s 2025-04-05 12:37:25.322981 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.32s 2025-04-05 12:37:25.322991 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.31s 2025-04-05 12:37:25.323001 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.11s 2025-04-05 12:37:25.323011 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.63s 2025-04-05 12:37:25.323022 | orchestrator | 2025-04-05 12:37:25 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:25.323036 | orchestrator | 2025-04-05 12:37:25 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:25.323144 | orchestrator | 2025-04-05 12:37:25 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:25.323166 | orchestrator | 2025-04-05 12:37:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:28.355415 | orchestrator | 2025-04-05 12:37:28 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:28.355559 | orchestrator | 2025-04-05 12:37:28 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:28.356038 | orchestrator | 2025-04-05 12:37:28 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:28.356547 | orchestrator | 2025-04-05 12:37:28 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:28.356639 | orchestrator | 2025-04-05 12:37:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:31.401195 | orchestrator | 2025-04-05 12:37:31 
| INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:31.401394 | orchestrator | 2025-04-05 12:37:31 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:31.402099 | orchestrator | 2025-04-05 12:37:31 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:31.405865 | orchestrator | 2025-04-05 12:37:31 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:34.433695 | orchestrator | 2025-04-05 12:37:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:34.433873 | orchestrator | 2025-04-05 12:37:34 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:34.436135 | orchestrator | 2025-04-05 12:37:34 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:34.441154 | orchestrator | 2025-04-05 12:37:34 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:34.442843 | orchestrator | 2025-04-05 12:37:34 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:37.484424 | orchestrator | 2025-04-05 12:37:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:37.484537 | orchestrator | 2025-04-05 12:37:37 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:37.485081 | orchestrator | 2025-04-05 12:37:37 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:37.535007 | orchestrator | 2025-04-05 12:37:37 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:40.516833 | orchestrator | 2025-04-05 12:37:37 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:40.516908 | orchestrator | 2025-04-05 12:37:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:40.516927 | orchestrator | 2025-04-05 12:37:40 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:40.517537 | orchestrator | 2025-04-05 12:37:40 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:40.517662 | orchestrator | 2025-04-05 12:37:40 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:40.518534 | orchestrator | 2025-04-05 12:37:40 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:43.546617 | orchestrator | 2025-04-05 12:37:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:43.546797 | orchestrator | 2025-04-05 12:37:43 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:43.546962 | orchestrator | 2025-04-05 12:37:43 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:43.547795 | orchestrator | 2025-04-05 12:37:43 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:43.548983 | orchestrator | 2025-04-05 12:37:43 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:46.575953 | orchestrator | 2025-04-05 12:37:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:46.576067 | orchestrator | 2025-04-05 12:37:46 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:46.576584 | orchestrator | 2025-04-05 12:37:46 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:46.576614 | orchestrator | 2025-04-05 12:37:46 | 
INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:46.577271 | orchestrator | 2025-04-05 12:37:46 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:49.613888 | orchestrator | 2025-04-05 12:37:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:49.614147 | orchestrator | 2025-04-05 12:37:49 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:49.614704 | orchestrator | 2025-04-05 12:37:49 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:49.614750 | orchestrator | 2025-04-05 12:37:49 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:49.615433 | orchestrator | 2025-04-05 12:37:49 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:52.646985 | orchestrator | 2025-04-05 12:37:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:52.647109 | orchestrator | 2025-04-05 12:37:52 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:52.647411 | orchestrator | 2025-04-05 12:37:52 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:52.648310 | orchestrator | 2025-04-05 12:37:52 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:52.649010 | orchestrator | 2025-04-05 12:37:52 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:55.674594 | orchestrator | 2025-04-05 12:37:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:55.674784 | orchestrator | 2025-04-05 12:37:55 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:37:55.675225 | orchestrator | 2025-04-05 12:37:55 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:37:55.675261 | orchestrator | 2025-04-05 12:37:55 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:37:55.675711 | orchestrator | 2025-04-05 12:37:55 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:37:58.699246 | orchestrator | 2025-04-05 12:37:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:37:58.699373 | orchestrator | 2025-04-05 12:37:58 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:01.728752 | orchestrator | 2025-04-05 12:37:58 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:01.728990 | orchestrator | 2025-04-05 12:37:58 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:01.729014 | orchestrator | 2025-04-05 12:37:58 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:01.729030 | orchestrator | 2025-04-05 12:37:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:01.729064 | orchestrator | 2025-04-05 12:38:01 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:01.729655 | orchestrator | 2025-04-05 12:38:01 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:01.729691 | orchestrator | 2025-04-05 12:38:01 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:01.730278 | orchestrator | 2025-04-05 12:38:01 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:04.763929 | orchestrator | 2025-04-05 12:38:01 | 
INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:04.764054 | orchestrator | 2025-04-05 12:38:04 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:04.764907 | orchestrator | 2025-04-05 12:38:04 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:04.765571 | orchestrator | 2025-04-05 12:38:04 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:04.765605 | orchestrator | 2025-04-05 12:38:04 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:07.793314 | orchestrator | 2025-04-05 12:38:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:07.793450 | orchestrator | 2025-04-05 12:38:07 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:07.793841 | orchestrator | 2025-04-05 12:38:07 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:07.794525 | orchestrator | 2025-04-05 12:38:07 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:07.795432 | orchestrator | 2025-04-05 12:38:07 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:10.845081 | orchestrator | 2025-04-05 12:38:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:10.845200 | orchestrator | 2025-04-05 12:38:10 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:10.846097 | orchestrator | 2025-04-05 12:38:10 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:10.846873 | orchestrator | 2025-04-05 12:38:10 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:10.847951 | orchestrator | 2025-04-05 12:38:10 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:13.902600 | orchestrator | 2025-04-05 12:38:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:13.902766 | orchestrator | 2025-04-05 12:38:13 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:13.904465 | orchestrator | 2025-04-05 12:38:13 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:13.907086 | orchestrator | 2025-04-05 12:38:13 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:13.909070 | orchestrator | 2025-04-05 12:38:13 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:16.961979 | orchestrator | 2025-04-05 12:38:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:16.962153 | orchestrator | 2025-04-05 12:38:16 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:16.963785 | orchestrator | 2025-04-05 12:38:16 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:16.965544 | orchestrator | 2025-04-05 12:38:16 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:16.968084 | orchestrator | 2025-04-05 12:38:16 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:20.018174 | orchestrator | 2025-04-05 12:38:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:20.018310 | orchestrator | 2025-04-05 12:38:20 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:20.019407 | orchestrator | 2025-04-05 12:38:20 | INFO  | Task 
4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:20.021988 | orchestrator | 2025-04-05 12:38:20 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:20.024567 | orchestrator | 2025-04-05 12:38:20 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:23.074587 | orchestrator | 2025-04-05 12:38:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:23.074712 | orchestrator | 2025-04-05 12:38:23 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:23.075945 | orchestrator | 2025-04-05 12:38:23 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:23.077408 | orchestrator | 2025-04-05 12:38:23 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state STARTED 2025-04-05 12:38:23.078995 | orchestrator | 2025-04-05 12:38:23 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:26.136390 | orchestrator | 2025-04-05 12:38:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:26.136528 | orchestrator | 2025-04-05 12:38:26 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:26.140513 | orchestrator | 2025-04-05 12:38:26 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:26.144680 | orchestrator | 2025-04-05 12:38:26 | INFO  | Task 4a0ea62f-fce6-4316-b008-939a3ec1936d is in state SUCCESS 2025-04-05 12:38:26.146538 | orchestrator | 2025-04-05 12:38:26.146581 | orchestrator | 2025-04-05 12:38:26.146593 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:38:26.146605 | orchestrator | 2025-04-05 12:38:26.146617 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:38:26.146628 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.377) 0:00:00.377 ******** 2025-04-05 12:38:26.146639 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:38:26.146651 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:38:26.146663 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:38:26.146674 | orchestrator | 2025-04-05 12:38:26.146685 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:38:26.146697 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.497) 0:00:00.875 ******** 2025-04-05 12:38:26.146709 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-04-05 12:38:26.146771 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-04-05 12:38:26.146785 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-04-05 12:38:26.146796 | orchestrator | 2025-04-05 12:38:26.146807 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-04-05 12:38:26.146819 | orchestrator | 2025-04-05 12:38:26.146829 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-05 12:38:26.146840 | orchestrator | Saturday 05 April 2025 12:35:37 +0000 (0:00:00.589) 0:00:01.464 ******** 2025-04-05 12:38:26.146852 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:38:26.146865 | orchestrator | 2025-04-05 12:38:26.146876 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
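The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling the state of the service tasks it has queued, one per enabled service, until each reaches SUCCESS; once a task finishes, its buffered Ansible play output is printed, as with the designate play that follows. A minimal sketch of such a wait loop in Python is given below; the get_task_state() callable, the fixed one-second interval, and any state names other than STARTED and SUCCESS are illustrative assumptions, not the actual OSISM implementation.

    import time
    import logging

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("deploy")

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll until every queued task leaves the PENDING/STARTED states.

        get_task_state(task_id) is a hypothetical helper returning a
        Celery-style state string such as STARTED, SUCCESS or FAILURE.
        """
        pending = set(task_ids)
        results = {}
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                log.info("Task %s is in state %s", task_id, state)
                if state not in ("PENDING", "STARTED"):
                    results[task_id] = state  # terminal state reached
            pending -= set(results)
            if pending:
                log.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)
        return results

In the output above the check cycle repeats roughly every three seconds, the configured wait plus the time spent querying the four task states.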
2025-04-05 12:38:26.146947 | orchestrator | Saturday 05 April 2025 12:35:37 +0000 (0:00:00.520) 0:00:01.984 ******** 2025-04-05 12:38:26.147099 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-04-05 12:38:26.147115 | orchestrator | 2025-04-05 12:38:26.147126 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-04-05 12:38:26.147153 | orchestrator | Saturday 05 April 2025 12:35:41 +0000 (0:00:03.880) 0:00:05.865 ******** 2025-04-05 12:38:26.147167 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-04-05 12:38:26.147180 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-04-05 12:38:26.147192 | orchestrator | 2025-04-05 12:38:26.147204 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-04-05 12:38:26.147217 | orchestrator | Saturday 05 April 2025 12:35:48 +0000 (0:00:06.285) 0:00:12.151 ******** 2025-04-05 12:38:26.147231 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:38:26.147243 | orchestrator | 2025-04-05 12:38:26.147255 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-04-05 12:38:26.147268 | orchestrator | Saturday 05 April 2025 12:35:51 +0000 (0:00:03.225) 0:00:15.376 ******** 2025-04-05 12:38:26.147280 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:38:26.147312 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-04-05 12:38:26.147325 | orchestrator | 2025-04-05 12:38:26.147775 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-04-05 12:38:26.147791 | orchestrator | Saturday 05 April 2025 12:35:54 +0000 (0:00:03.682) 0:00:19.059 ******** 2025-04-05 12:38:26.147802 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:38:26.147814 | orchestrator | 2025-04-05 12:38:26.147826 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-04-05 12:38:26.147837 | orchestrator | Saturday 05 April 2025 12:35:57 +0000 (0:00:02.939) 0:00:21.999 ******** 2025-04-05 12:38:26.147849 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-04-05 12:38:26.147860 | orchestrator | 2025-04-05 12:38:26.147871 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-04-05 12:38:26.147881 | orchestrator | Saturday 05 April 2025 12:36:01 +0000 (0:00:03.668) 0:00:25.668 ******** 2025-04-05 12:38:26.147895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-04-05 12:38:26.147987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.148006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.148019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-04-05 12:38:26.148747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.148772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.148783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.148795 | orchestrator | 2025-04-05 12:38:26.148806 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-04-05 12:38:26.148818 | orchestrator | Saturday 05 April 2025 12:36:04 +0000 (0:00:02.837) 0:00:28.506 ******** 2025-04-05 12:38:26.148830 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.148842 | orchestrator | 2025-04-05 12:38:26.148853 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-04-05 12:38:26.148931 | orchestrator | Saturday 05 April 2025 12:36:04 +0000 (0:00:00.107) 0:00:28.613 ******** 2025-04-05 12:38:26.148949 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.148962 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.148974 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.148986 | orchestrator | 2025-04-05 12:38:26.148999 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-05 12:38:26.149011 | orchestrator | Saturday 05 April 
2025 12:36:04 +0000 (0:00:00.369) 0:00:28.983 ******** 2025-04-05 12:38:26.149024 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:38:26.149036 | orchestrator | 2025-04-05 12:38:26.149049 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-04-05 12:38:26.149061 | orchestrator | Saturday 05 April 2025 12:36:05 +0000 (0:00:00.556) 0:00:29.540 ******** 2025-04-05 12:38:26.149080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.149093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.149105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.149117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 
12:38:26.149224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.149362 | orchestrator | 2025-04-05 12:38:26.149374 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-04-05 12:38:26.149408 | orchestrator | Saturday 05 April 2025 12:36:10 +0000 (0:00:05.291) 0:00:34.832 ******** 2025-04-05 12:38:26.149427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.149439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.149451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149504 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.149539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.149553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.149567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149625 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.149666 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.149681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.149695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149795 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.149808 | orchestrator | 2025-04-05 12:38:26.149821 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-04-05 12:38:26.149834 | orchestrator | Saturday 05 April 2025 12:36:13 +0000 (0:00:02.438) 0:00:37.271 ******** 2025-04-05 12:38:26.149876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.149891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.149905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.149958 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.149995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.150009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.150067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150129 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.150168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.150182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.150194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150247 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.150258 | orchestrator | 2025-04-05 12:38:26.150270 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-04-05 12:38:26.150281 | orchestrator | Saturday 05 April 2025 12:36:14 +0000 (0:00:01.701) 0:00:38.972 ******** 2025-04-05 12:38:26.150316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150656 | orchestrator | 2025-04-05 12:38:26.150667 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-04-05 12:38:26.150679 | orchestrator | Saturday 05 April 2025 12:36:21 +0000 (0:00:07.030) 0:00:46.002 ******** 2025-04-05 12:38:26.150690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.150753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150804 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.150983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.150998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151033 | orchestrator | 2025-04-05 12:38:26.151049 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-04-05 12:38:26.151061 | orchestrator | Saturday 05 April 2025 12:36:43 +0000 (0:00:21.750) 0:01:07.753 ******** 2025-04-05 12:38:26.151072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-05 12:38:26.151084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-05 12:38:26.151095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-05 12:38:26.151106 | orchestrator | 2025-04-05 12:38:26.151117 | orchestrator 
| TASK [designate : Copying over named.conf] ************************************* 2025-04-05 12:38:26.151128 | orchestrator | Saturday 05 April 2025 12:36:50 +0000 (0:00:06.955) 0:01:14.708 ******** 2025-04-05 12:38:26.151139 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-05 12:38:26.151150 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-05 12:38:26.151161 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-05 12:38:26.151172 | orchestrator | 2025-04-05 12:38:26.151183 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-04-05 12:38:26.151201 | orchestrator | Saturday 05 April 2025 12:36:55 +0000 (0:00:05.060) 0:01:19.769 ******** 2025-04-05 12:38:26.151214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
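[editor's note, not part of the captured console output] For context on the pools.yaml rendered from pools.yaml.j2 in the task above: a Designate pool definition for a bind9 backend typically has the shape sketched below. All hostnames, addresses and the key path are illustrative assumptions; the values actually deployed are filled in by the testbed's template, not taken from this log.

    - name: default
      description: Pool backed by the designate_backend_bind9 containers
      attributes: {}
      ns_records:
        - hostname: ns1.example.org.
          priority: 1
      nameservers:
        # BIND9 instances that answer queries for the pool's zones (illustrative address)
        - host: 192.0.2.10
          port: 53
      targets:
        - type: bind9
          description: BIND9 target (illustrative)
          masters:
            # designate-mdns endpoints that BIND9 pulls zone transfers from
            - host: 192.0.2.10
              port: 5354
          options:
            host: 192.0.2.10
            port: 53
            rndc_host: 192.0.2.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key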
2025-04-05 12:38:26.151262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151400 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151492 | orchestrator | 2025-04-05 12:38:26.151503 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-04-05 12:38:26.151514 | orchestrator | Saturday 05 April 2025 12:36:58 +0000 (0:00:02.580) 0:01:22.350 ******** 2025-04-05 12:38:26.151531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.151832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151844 | orchestrator | 2025-04-05 12:38:26.151855 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-05 12:38:26.151867 | orchestrator | Saturday 05 April 2025 12:37:00 +0000 (0:00:02.566) 0:01:24.917 ******** 2025-04-05 12:38:26.151878 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.151890 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.151901 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.151912 | orchestrator | 2025-04-05 12:38:26.151924 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-04-05 12:38:26.151935 | orchestrator | Saturday 05 April 2025 12:37:01 +0000 (0:00:00.613) 0:01:25.530 ******** 2025-04-05 12:38:26.151947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.151958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.151970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.151982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152039 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.152051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.152063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.152074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152143 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.152155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-05 12:38:26.152167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-05 12:38:26.152184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152248 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.152260 | orchestrator | 2025-04-05 12:38:26.152271 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-04-05 12:38:26.152282 | orchestrator | Saturday 05 April 2025 12:37:02 +0000 (0:00:01.432) 0:01:26.962 ******** 2025-04-05 12:38:26.152294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.152312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.152330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-05 12:38:26.152342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152538 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-05 12:38:26.152568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-05 12:38:26.152580 | orchestrator | 2025-04-05 12:38:26.152591 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-05 12:38:26.152603 | orchestrator | Saturday 05 April 2025 12:37:07 +0000 (0:00:04.786) 0:01:31.748 ******** 2025-04-05 12:38:26.152614 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:26.152630 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:26.152642 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:26.152653 | orchestrator | 2025-04-05 12:38:26.152664 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-04-05 12:38:26.152675 | orchestrator | Saturday 05 April 2025 12:37:07 +0000 (0:00:00.310) 0:01:32.059 ******** 2025-04-05 12:38:26.152686 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-04-05 12:38:26.152697 | orchestrator | 2025-04-05 12:38:26.152708 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-04-05 12:38:26.152734 | orchestrator | Saturday 05 April 2025 12:37:09 +0000 (0:00:01.893) 0:01:33.952 ******** 2025-04-05 12:38:26.152746 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:38:26.152762 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-04-05 12:38:26.152773 | orchestrator | 2025-04-05 12:38:26.152785 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-04-05 12:38:26.152796 | orchestrator | Saturday 05 April 2025 12:37:11 +0000 (0:00:01.910) 0:01:35.863 ******** 2025-04-05 12:38:26.152807 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.152819 | 
orchestrator | 2025-04-05 12:38:26.152830 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-05 12:38:26.152841 | orchestrator | Saturday 05 April 2025 12:37:24 +0000 (0:00:12.786) 0:01:48.649 ******** 2025-04-05 12:38:26.152852 | orchestrator | 2025-04-05 12:38:26.152863 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-05 12:38:26.152874 | orchestrator | Saturday 05 April 2025 12:37:24 +0000 (0:00:00.366) 0:01:49.016 ******** 2025-04-05 12:38:26.152885 | orchestrator | 2025-04-05 12:38:26.152897 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-05 12:38:26.152908 | orchestrator | Saturday 05 April 2025 12:37:25 +0000 (0:00:00.147) 0:01:49.164 ******** 2025-04-05 12:38:26.152919 | orchestrator | 2025-04-05 12:38:26.152930 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-04-05 12:38:26.152941 | orchestrator | Saturday 05 April 2025 12:37:25 +0000 (0:00:00.099) 0:01:49.263 ******** 2025-04-05 12:38:26.152952 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.152964 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.152975 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.152986 | orchestrator | 2025-04-05 12:38:26.152997 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-04-05 12:38:26.153008 | orchestrator | Saturday 05 April 2025 12:37:38 +0000 (0:00:13.059) 0:02:02.322 ******** 2025-04-05 12:38:26.153023 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153034 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.153045 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.153056 | orchestrator | 2025-04-05 12:38:26.153067 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-04-05 12:38:26.153078 | orchestrator | Saturday 05 April 2025 12:37:44 +0000 (0:00:06.138) 0:02:08.461 ******** 2025-04-05 12:38:26.153089 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153100 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.153111 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.153122 | orchestrator | 2025-04-05 12:38:26.153133 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-04-05 12:38:26.153144 | orchestrator | Saturday 05 April 2025 12:37:54 +0000 (0:00:10.571) 0:02:19.032 ******** 2025-04-05 12:38:26.153155 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153166 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.153177 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.153188 | orchestrator | 2025-04-05 12:38:26.153199 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-04-05 12:38:26.153211 | orchestrator | Saturday 05 April 2025 12:38:05 +0000 (0:00:10.802) 0:02:29.835 ******** 2025-04-05 12:38:26.153222 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153233 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.153244 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.153275 | orchestrator | 2025-04-05 12:38:26.153286 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-04-05 12:38:26.153298 | orchestrator | Saturday 05 April 2025 12:38:15 
+0000 (0:00:09.754) 0:02:39.589 ******** 2025-04-05 12:38:26.153309 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153322 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:26.153333 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:26.153344 | orchestrator | 2025-04-05 12:38:26.153356 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-04-05 12:38:26.153367 | orchestrator | Saturday 05 April 2025 12:38:20 +0000 (0:00:04.812) 0:02:44.401 ******** 2025-04-05 12:38:26.153378 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:26.153389 | orchestrator | 2025-04-05 12:38:26.153400 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:38:26.153411 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:38:26.153423 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:38:26.153434 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:38:26.153445 | orchestrator | 2025-04-05 12:38:26.153456 | orchestrator | 2025-04-05 12:38:26.153467 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:38:26.153478 | orchestrator | Saturday 05 April 2025 12:38:25 +0000 (0:00:05.021) 0:02:49.423 ******** 2025-04-05 12:38:26.153489 | orchestrator | =============================================================================== 2025-04-05 12:38:26.153500 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.75s 2025-04-05 12:38:26.153512 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.06s 2025-04-05 12:38:26.153523 | orchestrator | designate : Running Designate bootstrap container ---------------------- 12.79s 2025-04-05 12:38:26.153534 | orchestrator | designate : Restart designate-producer container ----------------------- 10.80s 2025-04-05 12:38:26.153545 | orchestrator | designate : Restart designate-central container ------------------------ 10.57s 2025-04-05 12:38:26.153556 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.75s 2025-04-05 12:38:26.153573 | orchestrator | designate : Copying over config.json files for services ----------------- 7.03s 2025-04-05 12:38:26.153584 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.96s 2025-04-05 12:38:26.153596 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.29s 2025-04-05 12:38:26.153611 | orchestrator | designate : Restart designate-api container ----------------------------- 6.14s 2025-04-05 12:38:29.199038 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.29s 2025-04-05 12:38:29.199148 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.06s 2025-04-05 12:38:29.199163 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.02s 2025-04-05 12:38:29.199175 | orchestrator | designate : Restart designate-worker container -------------------------- 4.81s 2025-04-05 12:38:29.199187 | orchestrator | designate : Check designate containers ---------------------------------- 4.79s 2025-04-05 12:38:29.199198 | orchestrator | 
service-ks-register : designate | Creating services --------------------- 3.88s 2025-04-05 12:38:29.199210 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.68s 2025-04-05 12:38:29.199221 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.67s 2025-04-05 12:38:29.199232 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.23s 2025-04-05 12:38:29.199243 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 2.94s 2025-04-05 12:38:29.199255 | orchestrator | 2025-04-05 12:38:26 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:29.199267 | orchestrator | 2025-04-05 12:38:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:29.199292 | orchestrator | 2025-04-05 12:38:29 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:29.200967 | orchestrator | 2025-04-05 12:38:29 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:29.204807 | orchestrator | 2025-04-05 12:38:29 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:29.208495 | orchestrator | 2025-04-05 12:38:29 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state STARTED 2025-04-05 12:38:32.260116 | orchestrator | 2025-04-05 12:38:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:32.260243 | orchestrator | 2025-04-05 12:38:32 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:32.261237 | orchestrator | 2025-04-05 12:38:32 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:32.262394 | orchestrator | 2025-04-05 12:38:32 | INFO  | Task 7d8c131c-358f-48de-9f94-ec065acc588c is in state STARTED 2025-04-05 12:38:32.263811 | orchestrator | 2025-04-05 12:38:32 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:32.265571 | orchestrator | 2025-04-05 12:38:32 | INFO  | Task 152c77c8-a72f-4618-9190-fbcba5920784 is in state SUCCESS 2025-04-05 12:38:32.265799 | orchestrator | 2025-04-05 12:38:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:32.268274 | orchestrator | 2025-04-05 12:38:32.268312 | orchestrator | 2025-04-05 12:38:32.268328 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:38:32.268344 | orchestrator | 2025-04-05 12:38:32.268359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:38:32.268390 | orchestrator | Saturday 05 April 2025 12:37:28 +0000 (0:00:00.395) 0:00:00.395 ******** 2025-04-05 12:38:32.268405 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:38:32.268421 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:38:32.268436 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:38:32.268450 | orchestrator | 2025-04-05 12:38:32.268485 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:38:32.268500 | orchestrator | Saturday 05 April 2025 12:37:29 +0000 (0:00:00.426) 0:00:00.822 ******** 2025-04-05 12:38:32.268515 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-04-05 12:38:32.268529 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-04-05 12:38:32.268544 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-04-05 
12:38:32.268558 | orchestrator | 2025-04-05 12:38:32.268572 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-04-05 12:38:32.268587 | orchestrator | 2025-04-05 12:38:32.268601 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-05 12:38:32.268615 | orchestrator | Saturday 05 April 2025 12:37:29 +0000 (0:00:00.513) 0:00:01.336 ******** 2025-04-05 12:38:32.268630 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:38:32.268646 | orchestrator | 2025-04-05 12:38:32.268661 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-04-05 12:38:32.268675 | orchestrator | Saturday 05 April 2025 12:37:31 +0000 (0:00:01.403) 0:00:02.739 ******** 2025-04-05 12:38:32.268690 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-04-05 12:38:32.268704 | orchestrator | 2025-04-05 12:38:32.268746 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-04-05 12:38:32.268762 | orchestrator | Saturday 05 April 2025 12:37:34 +0000 (0:00:03.522) 0:00:06.262 ******** 2025-04-05 12:38:32.268776 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-04-05 12:38:32.268790 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-04-05 12:38:32.268804 | orchestrator | 2025-04-05 12:38:32.268818 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-04-05 12:38:32.268832 | orchestrator | Saturday 05 April 2025 12:37:40 +0000 (0:00:05.884) 0:00:12.146 ******** 2025-04-05 12:38:32.268846 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:38:32.268860 | orchestrator | 2025-04-05 12:38:32.268874 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-04-05 12:38:32.268888 | orchestrator | Saturday 05 April 2025 12:37:43 +0000 (0:00:02.951) 0:00:15.098 ******** 2025-04-05 12:38:32.268902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:38:32.268916 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-04-05 12:38:32.268930 | orchestrator | 2025-04-05 12:38:32.268944 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-04-05 12:38:32.268957 | orchestrator | Saturday 05 April 2025 12:37:47 +0000 (0:00:03.363) 0:00:18.461 ******** 2025-04-05 12:38:32.268971 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:38:32.268985 | orchestrator | 2025-04-05 12:38:32.268999 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-04-05 12:38:32.269013 | orchestrator | Saturday 05 April 2025 12:37:50 +0000 (0:00:03.089) 0:00:21.551 ******** 2025-04-05 12:38:32.269026 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-04-05 12:38:32.269040 | orchestrator | 2025-04-05 12:38:32.269054 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-05 12:38:32.269068 | orchestrator | Saturday 05 April 2025 12:37:53 +0000 (0:00:03.764) 0:00:25.315 ******** 2025-04-05 12:38:32.269081 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:32.269096 | 
orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:32.269110 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:32.269124 | orchestrator | 2025-04-05 12:38:32.269138 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-04-05 12:38:32.269152 | orchestrator | Saturday 05 April 2025 12:37:54 +0000 (0:00:00.386) 0:00:25.701 ******** 2025-04-05 12:38:32.269168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269273 | orchestrator | 2025-04-05 12:38:32.269288 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-04-05 12:38:32.269302 | orchestrator | Saturday 05 April 2025 12:37:55 +0000 (0:00:01.477) 0:00:27.178 ******** 2025-04-05 12:38:32.269315 | orchestrator | skipping: [testbed-node-0] 
2025-04-05 12:38:32.269329 | orchestrator | 2025-04-05 12:38:32.269343 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-04-05 12:38:32.269357 | orchestrator | Saturday 05 April 2025 12:37:55 +0000 (0:00:00.183) 0:00:27.362 ******** 2025-04-05 12:38:32.269371 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:32.269385 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:32.269399 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:32.269413 | orchestrator | 2025-04-05 12:38:32.269426 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-05 12:38:32.269440 | orchestrator | Saturday 05 April 2025 12:37:56 +0000 (0:00:00.449) 0:00:27.811 ******** 2025-04-05 12:38:32.269454 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:38:32.269468 | orchestrator | 2025-04-05 12:38:32.269481 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-04-05 12:38:32.269495 | orchestrator | Saturday 05 April 2025 12:37:57 +0000 (0:00:00.849) 0:00:28.660 ******** 2025-04-05 12:38:32.269517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269571 | orchestrator | 2025-04-05 12:38:32.269586 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-04-05 12:38:32.269606 | orchestrator | Saturday 05 April 2025 12:37:59 +0000 (0:00:01.829) 0:00:30.489 ******** 2025-04-05 12:38:32.269631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269647 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:32.269669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269683 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:32.269710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269748 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:38:32.269763 | orchestrator | 2025-04-05 12:38:32.269778 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-04-05 12:38:32.269792 | orchestrator | Saturday 05 April 2025 12:37:59 +0000 (0:00:00.596) 0:00:31.085 ******** 2025-04-05 12:38:32.269806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269821 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:32.269835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269857 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:32.269883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.269898 | orchestrator | skipping: [testbed-node-2] 2025-04-05 
12:38:32.269912 | orchestrator | 2025-04-05 12:38:32.269926 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-04-05 12:38:32.269940 | orchestrator | Saturday 05 April 2025 12:38:00 +0000 (0:00:01.336) 0:00:32.422 ******** 2025-04-05 12:38:32.269960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.269975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270085 | orchestrator | 2025-04-05 12:38:32.270104 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-04-05 12:38:32.270118 | orchestrator | Saturday 05 April 2025 12:38:02 +0000 (0:00:01.343) 0:00:33.766 ******** 2025-04-05 12:38:32.270133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270187 | orchestrator | 2025-04-05 12:38:32.270201 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-04-05 12:38:32.270215 | orchestrator | Saturday 05 April 2025 12:38:04 +0000 (0:00:02.008) 0:00:35.774 ******** 2025-04-05 12:38:32.270229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-05 12:38:32.270243 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-05 12:38:32.270257 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-05 12:38:32.270271 | orchestrator | 2025-04-05 12:38:32.270285 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-04-05 12:38:32.270299 | orchestrator | Saturday 05 April 
2025 12:38:05 +0000 (0:00:01.402) 0:00:37.177 ******** 2025-04-05 12:38:32.270313 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:32.270335 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:32.270349 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:32.270363 | orchestrator | 2025-04-05 12:38:32.270377 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-04-05 12:38:32.270390 | orchestrator | Saturday 05 April 2025 12:38:07 +0000 (0:00:01.601) 0:00:38.779 ******** 2025-04-05 12:38:32.270421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.270437 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:38:32.270452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.270466 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:38:32.270488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-05 12:38:32.270503 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:38:32.270517 | orchestrator | 2025-04-05 12:38:32.270531 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-04-05 12:38:32.270545 | orchestrator | Saturday 05 April 2025 12:38:07 +0000 (0:00:00.602) 0:00:39.381 ******** 2025-04-05 12:38:32.270559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-05 12:38:32.270622 | orchestrator | 2025-04-05 12:38:32.270636 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-04-05 12:38:32.270650 | orchestrator | Saturday 05 April 2025 12:38:09 +0000 (0:00:01.228) 0:00:40.610 ******** 2025-04-05 12:38:32.270663 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:32.270677 | orchestrator | 2025-04-05 12:38:32.270691 | orchestrator | TASK 
[placement : Creating placement databases user and setting permissions] *** 2025-04-05 12:38:32.270704 | orchestrator | Saturday 05 April 2025 12:38:11 +0000 (0:00:02.028) 0:00:42.639 ******** 2025-04-05 12:38:32.270718 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:32.270763 | orchestrator | 2025-04-05 12:38:32.270777 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-04-05 12:38:32.270791 | orchestrator | Saturday 05 April 2025 12:38:13 +0000 (0:00:02.429) 0:00:45.068 ******** 2025-04-05 12:38:32.270805 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:32.270819 | orchestrator | 2025-04-05 12:38:32.270833 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-05 12:38:32.270846 | orchestrator | Saturday 05 April 2025 12:38:24 +0000 (0:00:10.418) 0:00:55.487 ******** 2025-04-05 12:38:32.270860 | orchestrator | 2025-04-05 12:38:32.270874 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-05 12:38:32.270887 | orchestrator | Saturday 05 April 2025 12:38:24 +0000 (0:00:00.055) 0:00:55.542 ******** 2025-04-05 12:38:32.270901 | orchestrator | 2025-04-05 12:38:32.270921 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-05 12:38:35.314958 | orchestrator | Saturday 05 April 2025 12:38:24 +0000 (0:00:00.053) 0:00:55.596 ******** 2025-04-05 12:38:35.315074 | orchestrator | 2025-04-05 12:38:35.315093 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-04-05 12:38:35.315140 | orchestrator | Saturday 05 April 2025 12:38:24 +0000 (0:00:00.192) 0:00:55.788 ******** 2025-04-05 12:38:35.315155 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:38:35.315171 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:38:35.315185 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:38:35.315199 | orchestrator | 2025-04-05 12:38:35.315213 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:38:35.315229 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:38:35.315245 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:38:35.315259 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:38:35.315273 | orchestrator | 2025-04-05 12:38:35.315287 | orchestrator | 2025-04-05 12:38:35.315302 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:38:35.315316 | orchestrator | Saturday 05 April 2025 12:38:28 +0000 (0:00:04.567) 0:01:00.355 ******** 2025-04-05 12:38:35.315330 | orchestrator | =============================================================================== 2025-04-05 12:38:35.315359 | orchestrator | placement : Running placement bootstrap container ---------------------- 10.42s 2025-04-05 12:38:35.315374 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.88s 2025-04-05 12:38:35.315398 | orchestrator | placement : Restart placement-api container ----------------------------- 4.57s 2025-04-05 12:38:35.315423 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.76s 2025-04-05 12:38:35.315445 | orchestrator | 
service-ks-register : placement | Creating services --------------------- 3.52s 2025-04-05 12:38:35.315467 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.36s 2025-04-05 12:38:35.315489 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.09s 2025-04-05 12:38:35.315511 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.95s 2025-04-05 12:38:35.315535 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.43s 2025-04-05 12:38:35.315715 | orchestrator | placement : Creating placement databases -------------------------------- 2.03s 2025-04-05 12:38:35.315759 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.01s 2025-04-05 12:38:35.315775 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.83s 2025-04-05 12:38:35.315791 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.60s 2025-04-05 12:38:35.315806 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.48s 2025-04-05 12:38:35.315823 | orchestrator | placement : include_tasks ----------------------------------------------- 1.40s 2025-04-05 12:38:35.315838 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.40s 2025-04-05 12:38:35.315852 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s 2025-04-05 12:38:35.315866 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.34s 2025-04-05 12:38:35.315880 | orchestrator | placement : Check placement containers ---------------------------------- 1.23s 2025-04-05 12:38:35.315894 | orchestrator | placement : include_tasks ----------------------------------------------- 0.85s 2025-04-05 12:38:35.315926 | orchestrator | 2025-04-05 12:38:35 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:35.316989 | orchestrator | 2025-04-05 12:38:35 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:35.317117 | orchestrator | 2025-04-05 12:38:35 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:35.317481 | orchestrator | 2025-04-05 12:38:35 | INFO  | Task 7d8c131c-358f-48de-9f94-ec065acc588c is in state SUCCESS 2025-04-05 12:38:35.318314 | orchestrator | 2025-04-05 12:38:35 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:38.374302 | orchestrator | 2025-04-05 12:38:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:38.374440 | orchestrator | 2025-04-05 12:38:38 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:38.377919 | orchestrator | 2025-04-05 12:38:38 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:38.377959 | orchestrator | 2025-04-05 12:38:38 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:38.379227 | orchestrator | 2025-04-05 12:38:38 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:38.379264 | orchestrator | 2025-04-05 12:38:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:41.442178 | orchestrator | 2025-04-05 12:38:41 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:41.444350 | 
orchestrator | 2025-04-05 12:38:41 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:41.446701 | orchestrator | 2025-04-05 12:38:41 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:41.452475 | orchestrator | 2025-04-05 12:38:41 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:44.493953 | orchestrator | 2025-04-05 12:38:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:44.494330 | orchestrator | 2025-04-05 12:38:44 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:44.494975 | orchestrator | 2025-04-05 12:38:44 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:44.495012 | orchestrator | 2025-04-05 12:38:44 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:44.497086 | orchestrator | 2025-04-05 12:38:44 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:47.534887 | orchestrator | 2025-04-05 12:38:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:47.535022 | orchestrator | 2025-04-05 12:38:47 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:47.536399 | orchestrator | 2025-04-05 12:38:47 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:47.538264 | orchestrator | 2025-04-05 12:38:47 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:47.540921 | orchestrator | 2025-04-05 12:38:47 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:50.575520 | orchestrator | 2025-04-05 12:38:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:50.575661 | orchestrator | 2025-04-05 12:38:50 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:50.576196 | orchestrator | 2025-04-05 12:38:50 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:50.577084 | orchestrator | 2025-04-05 12:38:50 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:50.578254 | orchestrator | 2025-04-05 12:38:50 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:53.610482 | orchestrator | 2025-04-05 12:38:50 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:53.610600 | orchestrator | 2025-04-05 12:38:53 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:53.610812 | orchestrator | 2025-04-05 12:38:53 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:53.610841 | orchestrator | 2025-04-05 12:38:53 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:53.611216 | orchestrator | 2025-04-05 12:38:53 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:56.659116 | orchestrator | 2025-04-05 12:38:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:56.659244 | orchestrator | 2025-04-05 12:38:56 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:56.659656 | orchestrator | 2025-04-05 12:38:56 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:56.661753 | orchestrator | 2025-04-05 12:38:56 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:56.662162 | 
orchestrator | 2025-04-05 12:38:56 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:38:59.706996 | orchestrator | 2025-04-05 12:38:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:38:59.707115 | orchestrator | 2025-04-05 12:38:59 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:38:59.707926 | orchestrator | 2025-04-05 12:38:59 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:38:59.707953 | orchestrator | 2025-04-05 12:38:59 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:38:59.707972 | orchestrator | 2025-04-05 12:38:59 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:02.744266 | orchestrator | 2025-04-05 12:38:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:02.744388 | orchestrator | 2025-04-05 12:39:02 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:02.745630 | orchestrator | 2025-04-05 12:39:02 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:02.745944 | orchestrator | 2025-04-05 12:39:02 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:02.745974 | orchestrator | 2025-04-05 12:39:02 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:05.777390 | orchestrator | 2025-04-05 12:39:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:05.777513 | orchestrator | 2025-04-05 12:39:05 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:05.778791 | orchestrator | 2025-04-05 12:39:05 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:05.778830 | orchestrator | 2025-04-05 12:39:05 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:08.797212 | orchestrator | 2025-04-05 12:39:05 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:08.797323 | orchestrator | 2025-04-05 12:39:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:08.797354 | orchestrator | 2025-04-05 12:39:08 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:08.797633 | orchestrator | 2025-04-05 12:39:08 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:08.797663 | orchestrator | 2025-04-05 12:39:08 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:08.798185 | orchestrator | 2025-04-05 12:39:08 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:11.822172 | orchestrator | 2025-04-05 12:39:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:11.822292 | orchestrator | 2025-04-05 12:39:11 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:11.822669 | orchestrator | 2025-04-05 12:39:11 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:11.822716 | orchestrator | 2025-04-05 12:39:11 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:11.823938 | orchestrator | 2025-04-05 12:39:11 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:14.862310 | orchestrator | 2025-04-05 12:39:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:14.862438 | orchestrator | 2025-04-05 
12:39:14 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:14.862702 | orchestrator | 2025-04-05 12:39:14 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:14.862779 | orchestrator | 2025-04-05 12:39:14 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:14.863409 | orchestrator | 2025-04-05 12:39:14 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:17.894764 | orchestrator | 2025-04-05 12:39:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:17.894890 | orchestrator | 2025-04-05 12:39:17 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:20.928181 | orchestrator | 2025-04-05 12:39:17 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:20.928306 | orchestrator | 2025-04-05 12:39:17 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:20.928326 | orchestrator | 2025-04-05 12:39:17 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:20.928342 | orchestrator | 2025-04-05 12:39:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:20.928375 | orchestrator | 2025-04-05 12:39:20 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:23.961327 | orchestrator | 2025-04-05 12:39:20 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:23.961427 | orchestrator | 2025-04-05 12:39:20 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:23.961442 | orchestrator | 2025-04-05 12:39:20 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:23.961455 | orchestrator | 2025-04-05 12:39:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:23.961482 | orchestrator | 2025-04-05 12:39:23 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:23.964168 | orchestrator | 2025-04-05 12:39:23 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:23.964666 | orchestrator | 2025-04-05 12:39:23 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:23.965139 | orchestrator | 2025-04-05 12:39:23 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:26.995256 | orchestrator | 2025-04-05 12:39:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:26.995384 | orchestrator | 2025-04-05 12:39:26 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:26.999478 | orchestrator | 2025-04-05 12:39:26 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:26.999616 | orchestrator | 2025-04-05 12:39:26 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:27.000311 | orchestrator | 2025-04-05 12:39:26 | INFO  | Task 62773842-39bc-4d5d-bc34-dc9b82245c40 is in state STARTED 2025-04-05 12:39:27.000628 | orchestrator | 2025-04-05 12:39:27 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:30.029492 | orchestrator | 2025-04-05 12:39:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:30.029615 | orchestrator | 2025-04-05 12:39:30 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:30.033153 | orchestrator | 2025-04-05 
12:39:30 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:30.033213 | orchestrator | 2025-04-05 12:39:30 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:30.033555 | orchestrator | 2025-04-05 12:39:30 | INFO  | Task 62773842-39bc-4d5d-bc34-dc9b82245c40 is in state STARTED 2025-04-05 12:39:30.034272 | orchestrator | 2025-04-05 12:39:30 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:33.061506 | orchestrator | 2025-04-05 12:39:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:33.061645 | orchestrator | 2025-04-05 12:39:33 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:33.064515 | orchestrator | 2025-04-05 12:39:33 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:33.064550 | orchestrator | 2025-04-05 12:39:33 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:33.064996 | orchestrator | 2025-04-05 12:39:33 | INFO  | Task 62773842-39bc-4d5d-bc34-dc9b82245c40 is in state STARTED 2025-04-05 12:39:33.065641 | orchestrator | 2025-04-05 12:39:33 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:36.100798 | orchestrator | 2025-04-05 12:39:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:36.100924 | orchestrator | 2025-04-05 12:39:36 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:36.101260 | orchestrator | 2025-04-05 12:39:36 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:36.102901 | orchestrator | 2025-04-05 12:39:36 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:36.103997 | orchestrator | 2025-04-05 12:39:36 | INFO  | Task 62773842-39bc-4d5d-bc34-dc9b82245c40 is in state SUCCESS 2025-04-05 12:39:36.105189 | orchestrator | 2025-04-05 12:39:36 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:39.153447 | orchestrator | 2025-04-05 12:39:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:39.153580 | orchestrator | 2025-04-05 12:39:39 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:39.154075 | orchestrator | 2025-04-05 12:39:39 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:39.155331 | orchestrator | 2025-04-05 12:39:39 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:39.155980 | orchestrator | 2025-04-05 12:39:39 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:42.199859 | orchestrator | 2025-04-05 12:39:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:42.200008 | orchestrator | 2025-04-05 12:39:42 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:42.202498 | orchestrator | 2025-04-05 12:39:42 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:42.205142 | orchestrator | 2025-04-05 12:39:42 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:42.206498 | orchestrator | 2025-04-05 12:39:42 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED 2025-04-05 12:39:42.206929 | orchestrator | 2025-04-05 12:39:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:45.248939 | orchestrator | 2025-04-05 
12:39:45 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED
2025-04-05 12:39:45.250618 | orchestrator | 2025-04-05 12:39:45 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED
2025-04-05 12:39:45.253095 | orchestrator | 2025-04-05 12:39:45 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED
2025-04-05 12:39:45.254928 | orchestrator | 2025-04-05 12:39:45 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED
2025-04-05 12:39:45.255600 | orchestrator | 2025-04-05 12:39:45 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:39:48.294796 | orchestrator | 2025-04-05 12:39:48 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED
2025-04-05 12:39:48.296304 | orchestrator | 2025-04-05 12:39:48 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED
2025-04-05 12:39:48.298523 | orchestrator | 2025-04-05 12:39:48 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED
2025-04-05 12:39:48.300555 | orchestrator | 2025-04-05 12:39:48 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state STARTED
2025-04-05 12:39:48.300885 | orchestrator | 2025-04-05 12:39:48 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:39:51.355771 | orchestrator | 2025-04-05 12:39:51 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED
2025-04-05 12:39:51.358106 | orchestrator | 2025-04-05 12:39:51 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED
2025-04-05 12:39:51.360765 | orchestrator | 2025-04-05 12:39:51 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED
2025-04-05 12:39:51.369220 | orchestrator | 2025-04-05 12:39:51 | INFO  | Task 4b725c58-5f88-4d00-95e1-ea91d1f3f073 is in state SUCCESS
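The block above is the deploy wrapper polling the OSISM manager: it repeatedly asks for the state of each submitted task and sleeps one second between rounds ("Wait 1 second(s) until the next check") until every task reports SUCCESS. A minimal sketch of such a wait loop, illustrative only and not the actual OSISM client code; get_task_state is a hypothetical callable standing in for the real status lookup:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    # Illustrative sketch of the wait loop seen in the log above.
    # get_task_state is a hypothetical callable (task_id -> state string);
    # failure states and timeouts are deliberately omitted for brevity.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)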
2025-04-05 12:39:51.370138 | orchestrator |
2025-04-05 12:39:51.371539 | orchestrator |
2025-04-05 12:39:51.371582 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-05 12:39:51.371598 | orchestrator |
2025-04-05 12:39:51.371657 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-05 12:39:51.371674 | orchestrator | Saturday 05 April 2025 12:38:32 +0000 (0:00:00.145) 0:00:00.145 ********
2025-04-05 12:39:51.371689 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:39:51.371704 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:39:51.371804 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:39:51.371820 | orchestrator |
2025-04-05 12:39:51.371834 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-05 12:39:51.371849 | orchestrator | Saturday 05 April 2025 12:38:32 +0000 (0:00:00.374) 0:00:00.520 ********
2025-04-05 12:39:51.371863 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-04-05 12:39:51.371877 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-04-05 12:39:51.371891 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-04-05 12:39:51.371905 | orchestrator |
2025-04-05 12:39:51.371919 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-04-05 12:39:51.371933 | orchestrator |
2025-04-05 12:39:51.371947 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-04-05 12:39:51.371962 | orchestrator | Saturday 05 April 2025 12:38:33 +0000 (0:00:00.575) 0:00:01.096 ********
2025-04-05 12:39:51.371976 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:39:51.372054 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:39:51.372439 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:39:51.372460 | orchestrator |
2025-04-05 12:39:51.372477 | orchestrator | PLAY RECAP *********************************************************************
2025-04-05 12:39:51.372493 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:39:51.372509 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:39:51.372524 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-05 12:39:51.372538 | orchestrator |
2025-04-05 12:39:51.372552 | orchestrator |
2025-04-05 12:39:51.372566 | orchestrator | TASKS RECAP ********************************************************************
2025-04-05 12:39:51.372581 | orchestrator | Saturday 05 April 2025 12:38:33 +0000 (0:00:00.609) 0:00:01.705 ********
2025-04-05 12:39:51.372595 | orchestrator | ===============================================================================
2025-04-05 12:39:51.372623 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.61s
2025-04-05 12:39:51.372638 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-04-05 12:39:51.372652 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-04-05 12:39:51.372666 | orchestrator |
2025-04-05 12:39:51.372680 | orchestrator | None
2025-04-05 12:39:51.372704 | orchestrator |
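The play above gates the run on the Keystone public endpoint answering before the next playbook (the neutron deploy below) starts. A minimal sketch of such a port check, assuming a plain TCP probe; the playbook itself does this with an Ansible task, and the exact host and port are not shown in the log (5000 is only the conventional Keystone API port):

import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=5.0):
    # Illustrative only: return once a TCP connection to host:port succeeds,
    # raise TimeoutError if the service does not come up within `timeout`.
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=5):
                return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"{host}:{port} did not come up in time")
            time.sleep(interval)

# Hypothetical usage (host and port assumed, not taken from the log):
# wait_for_port("api-int.testbed.osism.xyz", 5000)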
2025-04-05 12:39:51.372719 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-05 12:39:51.372773 | orchestrator |
2025-04-05 12:39:51.372787 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-05 12:39:51.372801 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.352) 0:00:00.352 ********
2025-04-05 12:39:51.372815 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:39:51.372939 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:39:51.372955 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:39:51.372969 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:39:51.372983 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:39:51.372998 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:39:51.373325 | orchestrator |
2025-04-05 12:39:51.373348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-05 12:39:51.373363 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.848) 0:00:01.201 ********
2025-04-05 12:39:51.373377 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-04-05 12:39:51.373399 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-04-05 12:39:51.373413 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-04-05 12:39:51.373428 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-04-05 12:39:51.373442 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-04-05 12:39:51.373456 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-04-05 12:39:51.373470 | orchestrator |
2025-04-05 12:39:51.373484 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-04-05 12:39:51.373499 | orchestrator |
2025-04-05 12:39:51.373512 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-05 12:39:51.373526 | orchestrator | Saturday 05 April 2025 12:35:37 +0000 (0:00:00.655) 0:00:01.856 ********
2025-04-05 12:39:51.373541 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-05 12:39:51.373556 | orchestrator |
2025-04-05 12:39:51.373570 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-04-05 12:39:51.373583 | orchestrator | Saturday 05 April 2025 12:35:38 +0000 (0:00:01.091) 0:00:02.948 ********
2025-04-05 12:39:51.373597 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:39:51.373612 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:39:51.373626 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:39:51.373652 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:39:51.373666 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:39:51.373680 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:39:51.373699 | orchestrator |
2025-04-05 12:39:51.373714 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-04-05 12:39:51.373749 | orchestrator | Saturday 05 April 2025 12:35:39 +0000 (0:00:00.977) 0:00:04.119 ********
2025-04-05 12:39:51.373764 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:39:51.373778 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:39:51.373797 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:39:51.373812 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:39:51.373826 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:39:51.373839 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:39:51.373853 | orchestrator |
2025-04-05 12:39:51.373868 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-04-05 12:39:51.374490 | orchestrator | Saturday 05 April 2025 12:35:40 +0000 (0:00:00.624) 0:00:05.097 ********
2025-04-05 12:39:51.374510 | orchestrator | ok: [testbed-node-0] => {
2025-04-05 12:39:51.374525 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374539 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374553 | orchestrator | }
2025-04-05 12:39:51.374567 | orchestrator | ok: [testbed-node-1] => {
2025-04-05 12:39:51.374581 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374595 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374608 | orchestrator | }
2025-04-05 12:39:51.374622 | orchestrator | ok: [testbed-node-2] => {
2025-04-05 12:39:51.374636 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374650 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374664 | orchestrator | }
2025-04-05 12:39:51.374677 | orchestrator | ok: [testbed-node-3] => {
2025-04-05 12:39:51.374691 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374705 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374719 | orchestrator | }
2025-04-05 12:39:51.374798 | orchestrator | ok: [testbed-node-4] => {
2025-04-05 12:39:51.374813 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374827 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374841 | orchestrator | }
2025-04-05 12:39:51.374855 | orchestrator | ok: [testbed-node-5] => {
2025-04-05 12:39:51.374869 | orchestrator |  "changed": false,
2025-04-05 12:39:51.374882 | orchestrator |  "msg": "All assertions passed"
2025-04-05 12:39:51.374896 | orchestrator | }
2025-04-05 12:39:51.374910 | orchestrator |
2025-04-05 12:39:51.374924 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-04-05 12:39:51.374938 | orchestrator | Saturday 05 April 2025 12:35:41 +0000 (0:00:00.624) 0:00:05.721 ********
2025-04-05 12:39:51.374952 | orchestrator | skipping: [testbed-node-0]
2025-04-05 12:39:51.374965 | orchestrator | skipping: [testbed-node-1]
2025-04-05 12:39:51.374979 | orchestrator | skipping: [testbed-node-2]
2025-04-05 12:39:51.374993 | orchestrator | skipping: [testbed-node-3]
2025-04-05 12:39:51.375080 | orchestrator | skipping: [testbed-node-4]
2025-04-05 12:39:51.375101 | orchestrator | skipping: [testbed-node-5]
2025-04-05 12:39:51.375117 | orchestrator |
2025-04-05 12:39:51.375135 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-04-05 12:39:51.375152 | orchestrator | Saturday 05 April 2025 12:35:42 +0000 (0:00:00.534) 0:00:06.256 ********
2025-04-05 12:39:51.375169 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-04-05 12:39:51.375185 | orchestrator |
2025-04-05 12:39:51.375201 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-04-05 12:39:51.375217 | orchestrator | Saturday 05 April 2025 12:35:45 +0000 (0:00:03.191) 0:00:09.448 ********
2025-04-05 12:39:51.375234 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-04-05 12:39:51.375252 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-04-05 12:39:51.375579 | orchestrator |
2025-04-05 12:39:51.375680 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-04-05 12:39:51.375699 | orchestrator | Saturday 05 April 2025 12:35:51 +0000 (0:00:06.311) 0:00:15.760 ********
2025-04-05 12:39:51.375712 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-05 12:39:51.375746 | orchestrator |
2025-04-05 12:39:51.375759 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-04-05 12:39:51.375772 | orchestrator | Saturday 05 April 2025 12:35:54 +0000 (0:00:03.065) 0:00:18.825 ********
2025-04-05 12:39:51.375785 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-05 12:39:51.375797 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-04-05 12:39:51.375810 | orchestrator |
2025-04-05 12:39:51.375823 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-04-05 12:39:51.375835 | orchestrator | Saturday 05 April 2025 12:35:58 +0000 (0:00:03.722) 0:00:22.547 ********
2025-04-05 12:39:51.375848 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-05 12:39:51.375860 | orchestrator |
2025-04-05 12:39:51.375873 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-04-05 12:39:51.375885 | orchestrator | Saturday 05 April 2025 12:36:01 +0000 (0:00:03.169) 0:00:25.716 ********
2025-04-05 12:39:51.375898 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-04-05 12:39:51.375910 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-04-05 12:39:51.375922 | orchestrator |
2025-04-05 12:39:51.375942 | orchestrator | TASK [neutron :
include_tasks] ************************************************* 2025-04-05 12:39:51.375954 | orchestrator | Saturday 05 April 2025 12:36:09 +0000 (0:00:07.601) 0:00:33.317 ******** 2025-04-05 12:39:51.375967 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.375979 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.375992 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.376004 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.376016 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.376029 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.376041 | orchestrator | 2025-04-05 12:39:51.376054 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-04-05 12:39:51.376066 | orchestrator | Saturday 05 April 2025 12:36:09 +0000 (0:00:00.598) 0:00:33.916 ******** 2025-04-05 12:39:51.376079 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.376091 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.376104 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.376116 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.376128 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.376141 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.376153 | orchestrator | 2025-04-05 12:39:51.376165 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-04-05 12:39:51.376178 | orchestrator | Saturday 05 April 2025 12:36:13 +0000 (0:00:03.838) 0:00:37.754 ******** 2025-04-05 12:39:51.376190 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:39:51.376203 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:39:51.376289 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:39:51.376304 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:39:51.376318 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:39:51.376332 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:39:51.376346 | orchestrator | 2025-04-05 12:39:51.376360 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-05 12:39:51.376375 | orchestrator | Saturday 05 April 2025 12:36:14 +0000 (0:00:01.256) 0:00:39.010 ******** 2025-04-05 12:39:51.376389 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.376403 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.376704 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.376741 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.376755 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.376768 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.376790 | orchestrator | 2025-04-05 12:39:51.376803 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-04-05 12:39:51.376816 | orchestrator | Saturday 05 April 2025 12:36:18 +0000 (0:00:03.269) 0:00:42.279 ******** 2025-04-05 12:39:51.376832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.376920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.376941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.376955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.376968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.376990 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.377124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.377456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.377509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.377642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377655 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.377670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.377698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.377758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.377854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.377868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.377881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.377903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.377931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.378265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.378297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.378312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.378336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.378350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.378644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.378796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.378819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.378833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.378857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.378871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.379164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.379412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.379548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.379566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.379576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.379720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.379755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.379775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.379786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.379808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.380136 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.380152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.380212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.380235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.380298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.380359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.380395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.380406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.380479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.380746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.380932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.380946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.380981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.380992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.381089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.381121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.381132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.381215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.381236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.381468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.381508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.381826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.381914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.381926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.381953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.381965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.382072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.382091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.382119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.382130 | orchestrator | 2025-04-05 12:39:51.382141 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-04-05 12:39:51.382152 | orchestrator | Saturday 05 April 2025 12:36:21 +0000 (0:00:03.193) 0:00:45.473 ******** 2025-04-05 12:39:51.383834 | orchestrator | [WARNING]: Skipped 2025-04-05 12:39:51.383859 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-04-05 12:39:51.383869 | orchestrator | due to this access issue: 2025-04-05 12:39:51.383878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-04-05 12:39:51.383886 | orchestrator | a directory 2025-04-05 12:39:51.383895 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:39:51.383904 | orchestrator | 2025-04-05 12:39:51.383912 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-05 12:39:51.383921 | orchestrator | Saturday 05 April 2025 12:36:21 +0000 (0:00:00.609) 0:00:46.082 ******** 2025-04-05 12:39:51.383930 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:39:51.383940 | orchestrator | 2025-04-05 12:39:51.383948 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-04-05 12:39:51.383957 | orchestrator | Saturday 05 April 2025 12:36:23 +0000 (0:00:01.439) 0:00:47.521 ******** 2025-04-05 12:39:51.383966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.385668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.386219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.386316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.386380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.386399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.386435 | orchestrator | 2025-04-05 12:39:51.386452 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-04-05 12:39:51.386468 | orchestrator | Saturday 05 April 2025 12:36:27 +0000 (0:00:04.194) 0:00:51.716 ******** 2025-04-05 12:39:51.386500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386517 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.386532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386547 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.386575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386591 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.386606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.386629 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.386644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.386659 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.386684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.386708 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.386760 | orchestrator | 2025-04-05 12:39:51.386784 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-04-05 12:39:51.386801 | orchestrator | Saturday 05 April 2025 12:36:30 +0000 (0:00:02.949) 0:00:54.666 ******** 2025-04-05 12:39:51.386832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386848 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.386863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386886 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.386901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.386915 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.386941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.386956 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.386970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.386985 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.387011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.387026 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.387040 | orchestrator | 2025-04-05 12:39:51.387054 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-04-05 12:39:51.387075 | orchestrator | Saturday 05 April 2025 12:36:33 +0000 (0:00:03.514) 0:00:58.180 ******** 2025-04-05 12:39:51.387089 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.387103 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.387117 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.387131 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.387145 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.387158 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.387173 | orchestrator | 2025-04-05 12:39:51.387187 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-04-05 12:39:51.387201 | orchestrator | Saturday 05 April 2025 12:36:37 +0000 (0:00:03.918) 0:01:02.099 ******** 2025-04-05 12:39:51.387215 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.387229 | orchestrator | 2025-04-05 12:39:51.387243 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-04-05 12:39:51.387257 | orchestrator | Saturday 05 April 2025 12:36:37 +0000 (0:00:00.097) 0:01:02.196 ******** 2025-04-05 12:39:51.387270 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.387284 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.387298 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.387311 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.387325 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.387339 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.387352 | orchestrator | 2025-04-05 12:39:51.387366 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-04-05 12:39:51.387380 | orchestrator | Saturday 05 April 2025 12:36:38 +0000 (0:00:00.876) 0:01:03.072 ******** 2025-04-05 12:39:51.387394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.387417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.387484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.387527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.387554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.387591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.387620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.387635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.387691 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.387706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387740 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.387755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.387770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
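Editor's note on the loop output above: the "Copying over existing policy file" task iterates over the neutron service definitions and reports every item as skipping, most likely because no custom policy file is supplied in this testbed and, per item, because a definition is only acted on when the service is enabled and the host is in its group (neutron-tls-proxy even uses the string 'no', which Ansible's bool filter treats as false). The following Python sketch is an illustrative model of that per-item selection, using trimmed copies of the testbed-node-0 values from the log; it is not kolla-ansible's actual task code, and services_for_host is a made-up helper name.

    # Illustrative model of the per-item skip logic, not kolla-ansible task code.
    # Values are trimmed copies of the testbed-node-0 definitions in the log above.
    neutron_services = {
        "neutron-server": {"enabled": True, "host_in_groups": True},
        "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": False},
        "neutron-dhcp-agent": {"enabled": False, "host_in_groups": True},
    }

    def services_for_host(services):
        """Names of services that would actually be handled on this host."""
        return [name for name, svc in services.items()
                if svc["enabled"] and svc["host_in_groups"]]

    print(services_for_host(neutron_services))   # -> ['neutron-server']
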
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.387861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.387890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.387922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.387961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.387975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.387990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.388005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.388089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.388127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388148 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.388163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.388178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.388266 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.388296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.388321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.388366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.388395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.388410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.388450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.388478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388493 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.388508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.388522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
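Editor's note: each definition dumped above carries a healthcheck block with interval, retries, start_period and timeout (the numeric values appear to be seconds) plus a CMD-SHELL test such as healthcheck_port or healthcheck_curl. As a reading aid only, the sketch below maps such a block onto docker-run style health flags; healthcheck_to_cli is a hypothetical helper and this is not the code path the kolla_docker module actually uses.

    # Reading aid: translate a healthcheck block from the log into docker-run
    # style flags. Illustrative sketch only, not kolla_docker's implementation.
    def healthcheck_to_cli(hc):
        cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",      # values assumed to be seconds
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    example = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9696"],
        "timeout": "30",
    }
    print(" ".join(healthcheck_to_cli(example)))
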
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.388592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.388608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.389106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.389132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.389292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.389377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.389398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.389447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.389462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389484 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.389601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.389633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': 
True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.389887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.389915 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.389941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.389970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.390050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.390176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.390208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.390258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.390296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390320 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.390416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.390442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
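Editor's note: the neutron-server definitions above also embed a haproxy block describing two listeners on port 9696, an internal one on the load-balancer VIP and an external one for api.testbed.osism.xyz. The short sketch below only reads such a block and prints what each listener would expose; the internal VIP address is a placeholder because it does not appear in this excerpt, and this is not the template kolla-ansible renders.

    # Sketch: summarize the haproxy sub-dict of a service definition.
    # Copied (trimmed) from the neutron-server entries in the log above.
    listeners = {
        "neutron_server": {"enabled": True, "mode": "http", "external": False,
                           "port": "9696", "listen_port": "9696"},
        "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "9696", "listen_port": "9696"},
    }

    for name, cfg in listeners.items():
        if not cfg["enabled"]:
            continue
        scope = "external" if cfg["external"] else "internal"
        endpoint = cfg.get("external_fqdn", "<internal VIP>")  # placeholder, not from the log
        print(f"{name}: {scope} listener on {endpoint}:{cfg['listen_port']} "
              f"-> backend port {cfg['port']} ({cfg['mode']})")
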
5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.390551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.390662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.390692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.390780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.390821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.390916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.390940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.390965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.391007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391031 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.391045 | orchestrator | 2025-04-05 12:39:51.391060 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-04-05 12:39:51.391074 | orchestrator | Saturday 05 April 2025 12:36:43 +0000 (0:00:04.622) 0:01:07.695 ******** 2025-04-05 12:39:51.391089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.391192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.391309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.391340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.391433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.391507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.391674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.391718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.391899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.391928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.391953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.391990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.392056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.392148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.392203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.392266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.392280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.392408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.392434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.392496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.392602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.392633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.392704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.392744 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.392868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.392956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.392986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.393102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.393127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 
12:39:51.393184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.393203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.393232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.393247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.393398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.393423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.393453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.393468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.393561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.393663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.393678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.393887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.393958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.393973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394250 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.394264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.394440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.394616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.394673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.394713 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.394824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.394893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.394908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.394921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.394934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.395070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.395096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395111 | orchestrator | 2025-04-05 12:39:51.395124 | orchestrator | TASK [neutron : Copying over neutron.conf] 
************************************* 2025-04-05 12:39:51.395137 | orchestrator | Saturday 05 April 2025 12:36:48 +0000 (0:00:05.062) 0:01:12.757 ******** 2025-04-05 12:39:51.395149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.395162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.395343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.395378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.395399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.395533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 
12:39:51.395699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.395751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.395945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.395967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.395988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.396002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.396016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.396054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.396183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.396321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.396367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.396407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.396444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.396538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-04-05 12:39:51.396560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.396626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.396656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.396811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-04-05 12:39:51.396930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.396943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.397121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.397148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.397296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.397321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.397343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.397493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.397653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.397697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.397849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.397869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.397899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.397909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.397920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.397987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398050 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.398076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.398087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.398178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.398217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.398244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.398262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.398363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.398392 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.398413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.398432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})
2025-04-05 12:39:51.398554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-05 12:39:51.398582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-05 12:39:51.398600 | orchestrator |
2025-04-05 12:39:51.398617 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-04-05 12:39:51.398634 | orchestrator | Saturday 05 April 2025 12:36:55 +0000 (0:00:07.466) 0:01:20.224 ********
2025-04-05 12:39:51.398651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-05 12:39:51.398669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-05 12:39:51.398686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value':
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.398850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.398875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.398897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.398909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.398989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.399040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.399059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.399077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.399216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.399235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399259 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.399333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.399354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.399389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.399406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.399450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.399544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.399563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.399613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.399639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.399775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-04-05 12:39:51.399839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.399971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.399994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.400064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.400161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.400208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.400235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.400331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.400417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.400563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.400600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.400715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.400747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400762 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.400776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.400799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.400931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.400953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.400967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.401042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.401086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.401121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.401136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.401225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.401240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401255 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.401280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.401302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2025-04-05 12:39:51.401316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.401422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.401459 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.401484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.401586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.401616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.401643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.401672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.401761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401781 | orchestrator | 2025-04-05 12:39:51.401793 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-04-05 12:39:51.401808 | orchestrator | Saturday 05 April 2025 12:36:58 +0000 (0:00:02.596) 0:01:22.821 ******** 2025-04-05 
12:39:51.401821 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:39:51.401833 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.401842 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.401851 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.401860 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:39:51.401868 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:39:51.401883 | orchestrator | 2025-04-05 12:39:51.401892 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-04-05 12:39:51.401900 | orchestrator | Saturday 05 April 2025 12:37:03 +0000 (0:00:04.837) 0:01:27.658 ******** 2025-04-05 12:39:51.401920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.401930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.401995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.402046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.402151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.402179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.402216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.402269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402281 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.402296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.402305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.402405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.402460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.402530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.402566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.402575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402584 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.402637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.402655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.402820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.402897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.402919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.402976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.402998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.403007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403024 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.403032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.403094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.403107 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.403216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.403256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.403433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.403542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.403561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.403651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.403676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.403779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.403894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.403903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 
12:39:51.403916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.403933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.403982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.403993 | orchestrator | 2025-04-05 12:39:51.404002 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-04-05 12:39:51.404010 | orchestrator | Saturday 05 April 2025 12:37:07 +0000 (0:00:04.550) 0:01:32.209 ******** 2025-04-05 12:39:51.404018 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404026 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404034 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404046 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404054 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404062 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404070 | orchestrator | 2025-04-05 12:39:51.404078 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-04-05 12:39:51.404086 | orchestrator | Saturday 05 April 2025 12:37:09 +0000 (0:00:01.936) 0:01:34.145 ******** 2025-04-05 12:39:51.404094 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404102 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404110 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:39:51.404118 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404126 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404134 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404142 | orchestrator | 2025-04-05 12:39:51.404150 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-04-05 12:39:51.404163 | orchestrator | Saturday 05 April 2025 12:37:11 +0000 (0:00:02.083) 0:01:36.229 ******** 2025-04-05 12:39:51.404171 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404179 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404187 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404195 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404203 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404211 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404219 | orchestrator | 2025-04-05 12:39:51.404226 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-04-05 12:39:51.404235 | orchestrator | Saturday 05 April 2025 12:37:14 +0000 (0:00:02.934) 0:01:39.163 ******** 2025-04-05 12:39:51.404242 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404250 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404258 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404266 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404274 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404282 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404290 | orchestrator | 2025-04-05 12:39:51.404298 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-04-05 12:39:51.404306 | orchestrator | Saturday 05 April 2025 12:37:16 +0000 (0:00:02.036) 0:01:41.200 ******** 2025-04-05 12:39:51.404314 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404322 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404330 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404338 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404346 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404354 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404362 | orchestrator | 2025-04-05 12:39:51.404370 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-04-05 12:39:51.404378 | orchestrator | Saturday 05 April 2025 12:37:19 +0000 (0:00:02.358) 0:01:43.559 ******** 2025-04-05 12:39:51.404386 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.404394 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404402 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404409 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404417 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404425 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404433 | orchestrator | 2025-04-05 12:39:51.404441 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-04-05 12:39:51.404449 | orchestrator | Saturday 05 April 2025 12:37:21 +0000 (0:00:02.162) 0:01:45.722 ******** 2025-04-05 12:39:51.404457 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404466 | orchestrator | skipping: 
[testbed-node-0] 2025-04-05 12:39:51.404474 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404482 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.404490 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404498 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.404509 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404517 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.404525 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404533 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.404541 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-05 12:39:51.404549 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.404557 | orchestrator | 2025-04-05 12:39:51.404564 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-04-05 12:39:51.404572 | orchestrator | Saturday 05 April 2025 12:37:23 +0000 (0:00:01.741) 0:01:47.463 ******** 2025-04-05 12:39:51.404625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.404637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.404654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404755 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.404764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.404842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.404858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.404868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.404893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.404913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}}})  2025-04-05 12:39:51.404961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.404982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.404990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.404999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.405080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.405106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.405186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405203 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.405211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405219 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.405236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.405251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.405328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.405444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.405524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.405533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405581 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.405636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.405657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-04-05 12:39:51.405787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.405801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.405867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.405878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405887 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.405895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.405904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.405983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.405995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.406162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.406202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406263 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.406272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.406281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.406368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.406485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406496 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.406526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406543 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.406551 | orchestrator | 2025-04-05 12:39:51.406559 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-04-05 12:39:51.406568 | orchestrator | Saturday 05 April 2025 12:37:26 +0000 (0:00:02.945) 0:01:50.408 ******** 2025-04-05 12:39:51.406616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.406627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.406674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406819 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.406836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.406890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.406924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.406933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.406942 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.406950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.406999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.407016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.407116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.407251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.407325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.407350 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.407365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-04-05 12:39:51.407462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.407470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.407477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 
12:39:51.407548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.407563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.407644 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.407651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-04-05 12:39:51.407695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.407788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.407815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.407859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407869 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.407884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.407892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.407965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.407983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.407998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.408017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.408070 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.408078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.408108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.408115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408138 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.408146 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': 
False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.408209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.408232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.408240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.408259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.408290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.408298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.408320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.408331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.408339 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408346 | orchestrator | 2025-04-05 12:39:51.408353 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-04-05 12:39:51.408361 | orchestrator | Saturday 05 April 2025 12:37:29 +0000 (0:00:03.199) 0:01:53.607 ******** 2025-04-05 12:39:51.408368 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408376 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408384 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408391 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408398 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408405 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408412 | orchestrator | 2025-04-05 12:39:51.408419 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-04-05 12:39:51.408426 | orchestrator | Saturday 05 April 2025 12:37:31 +0000 (0:00:02.590) 0:01:56.197 ******** 2025-04-05 12:39:51.408433 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408440 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408446 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408453 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:39:51.408460 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:39:51.408467 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:39:51.408474 | orchestrator | 2025-04-05 12:39:51.408481 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-04-05 12:39:51.408488 | orchestrator | Saturday 05 April 2025 12:37:37 +0000 (0:00:05.129) 0:02:01.327 ******** 2025-04-05 12:39:51.408495 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408502 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408509 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408534 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408542 | orchestrator | skipping: [testbed-node-3] 
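The loop items skipped above are the entries of kolla-ansible's neutron service map, which Ansible prints as Python dicts. As an illustration only, here is the single entry that is enabled on the compute hosts in this run, neutron-ovn-metadata-agent, rewritten as YAML; the keys and values are taken from the logged item, while the YAML layout, the omission of the per-host group/host_in_groups flags, and the dropped trailing empty string in the volume list are assumptions made for readability:

  neutron-ovn-metadata-agent:
    container_name: neutron_ovn_metadata_agent
    image: registry.osism.tech/kolla/neutron-metadata-agent:2024.1
    privileged: true
    enabled: true
    volumes:
      - /etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - neutron_metadata_socket:/var/lib/neutron/kolla/
      - /run/openvswitch:/run/openvswitch:shared
      - /run/netns:/run/netns:shared
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: '30'
      retries: '3'
      start_period: '5'
      test: ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640']
      timeout: '30'

The test field calls kolla's healthcheck_port helper, which, roughly, verifies that the named process holds a connection on the given port (here 6640). The disabled agents in the same map (neutron-l3-agent, neutron-dhcp-agent, and so on) carry analogous definitions but are skipped on every node because their enabled flag is false.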
2025-04-05 12:39:51.408549 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408557 | orchestrator | 2025-04-05 12:39:51.408564 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-04-05 12:39:51.408571 | orchestrator | Saturday 05 April 2025 12:37:39 +0000 (0:00:02.406) 0:02:03.733 ******** 2025-04-05 12:39:51.408578 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408585 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408592 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408598 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408606 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408613 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408619 | orchestrator | 2025-04-05 12:39:51.408627 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-04-05 12:39:51.408634 | orchestrator | Saturday 05 April 2025 12:37:42 +0000 (0:00:02.725) 0:02:06.459 ******** 2025-04-05 12:39:51.408645 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408652 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408659 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408666 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408673 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408680 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408687 | orchestrator | 2025-04-05 12:39:51.408694 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-04-05 12:39:51.408701 | orchestrator | Saturday 05 April 2025 12:37:44 +0000 (0:00:01.991) 0:02:08.450 ******** 2025-04-05 12:39:51.408708 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408715 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408737 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408744 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408751 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408758 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408765 | orchestrator | 2025-04-05 12:39:51.408772 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-04-05 12:39:51.408779 | orchestrator | Saturday 05 April 2025 12:37:46 +0000 (0:00:02.470) 0:02:10.921 ******** 2025-04-05 12:39:51.408786 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408793 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408800 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408807 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408814 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408821 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408828 | orchestrator | 2025-04-05 12:39:51.408835 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-04-05 12:39:51.408842 | orchestrator | Saturday 05 April 2025 12:37:48 +0000 (0:00:02.107) 0:02:13.029 ******** 2025-04-05 12:39:51.408849 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408856 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408863 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408870 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408877 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:39:51.408884 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408891 | orchestrator | 2025-04-05 12:39:51.408898 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-04-05 12:39:51.408905 | orchestrator | Saturday 05 April 2025 12:37:51 +0000 (0:00:03.076) 0:02:16.105 ******** 2025-04-05 12:39:51.408912 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408919 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408926 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408936 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.408944 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.408950 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.408957 | orchestrator | 2025-04-05 12:39:51.408964 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-04-05 12:39:51.408971 | orchestrator | Saturday 05 April 2025 12:37:53 +0000 (0:00:01.835) 0:02:17.941 ******** 2025-04-05 12:39:51.408978 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.408985 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.408992 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.408999 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.409006 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.409013 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.409020 | orchestrator | 2025-04-05 12:39:51.409027 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-04-05 12:39:51.409034 | orchestrator | Saturday 05 April 2025 12:37:56 +0000 (0:00:02.708) 0:02:20.649 ******** 2025-04-05 12:39:51.409041 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409052 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.409059 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409066 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.409073 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409080 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.409087 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409094 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.409101 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409108 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.409115 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-05 12:39:51.409122 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.409129 | orchestrator | 2025-04-05 12:39:51.409136 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-04-05 12:39:51.409143 | orchestrator | Saturday 05 April 2025 12:37:59 +0000 (0:00:02.685) 0:02:23.334 ******** 2025-04-05 12:39:51.409167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.409175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.409232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409282 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.409312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.409342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409361 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.409384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.409392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.409433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.409514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.409559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409578 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.409585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.409607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.409648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2025-04-05 12:39:51.409719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.409741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.409749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.409933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.409942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.409953 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.409961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.409969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.409993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.410186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.410242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.410304 | orchestrator | skipping: 
[testbed-node-3] 2025-04-05 12:39:51.410312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.410334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410370 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.410378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.410385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 
'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.410506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.410530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410544 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.410552 | orchestrator | 2025-04-05 12:39:51.410559 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-04-05 12:39:51.410566 | orchestrator | Saturday 05 April 2025 12:38:01 +0000 (0:00:02.166) 0:02:25.501 ******** 2025-04-05 12:39:51.410592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.410601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.410658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-05 12:39:51.410825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.410870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
2025-04-05 12:39:51.410884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.410936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.410962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.410980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.410987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.410995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 
12:39:51.411021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.411046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.411085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-05 12:39:51.411100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-05 12:39:51.411165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.411187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.411233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.411273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.411297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411303 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411310 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.411319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.411332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.411356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.411375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.411398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.411405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-05 12:39:51.411446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:39:51.411459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:39:51.411468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-05 12:39:51.411485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-05 12:39:51.411492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-05 12:39:51.411499 | orchestrator | 2025-04-05 12:39:51.411505 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-05 12:39:51.411512 | orchestrator 
| Saturday 05 April 2025 12:38:03 +0000 (0:00:02.715) 0:02:28.216 ******** 2025-04-05 12:39:51.411518 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:39:51.411524 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:39:51.411530 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:39:51.411537 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:39:51.411543 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:39:51.411549 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:39:51.411555 | orchestrator | 2025-04-05 12:39:51.411562 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-04-05 12:39:51.411568 | orchestrator | Saturday 05 April 2025 12:38:04 +0000 (0:00:00.632) 0:02:28.849 ******** 2025-04-05 12:39:51.411574 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:39:51.411580 | orchestrator | 2025-04-05 12:39:51.411587 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-04-05 12:39:51.411593 | orchestrator | Saturday 05 April 2025 12:38:06 +0000 (0:00:02.070) 0:02:30.920 ******** 2025-04-05 12:39:51.411599 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:39:51.411605 | orchestrator | 2025-04-05 12:39:51.411612 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-04-05 12:39:51.411618 | orchestrator | Saturday 05 April 2025 12:38:08 +0000 (0:00:02.173) 0:02:33.094 ******** 2025-04-05 12:39:51.411624 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:39:51.411630 | orchestrator | 2025-04-05 12:39:51.411637 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411643 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:33.250) 0:03:06.344 ******** 2025-04-05 12:39:51.411649 | orchestrator | 2025-04-05 12:39:51.411659 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411668 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.056) 0:03:06.401 ******** 2025-04-05 12:39:51.411674 | orchestrator | 2025-04-05 12:39:51.411680 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411686 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.065) 0:03:06.466 ******** 2025-04-05 12:39:51.411692 | orchestrator | 2025-04-05 12:39:51.411699 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411705 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.058) 0:03:06.524 ******** 2025-04-05 12:39:51.411711 | orchestrator | 2025-04-05 12:39:51.411717 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411738 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.055) 0:03:06.580 ******** 2025-04-05 12:39:51.411745 | orchestrator | 2025-04-05 12:39:51.411751 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-05 12:39:51.411758 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.282) 0:03:06.862 ******** 2025-04-05 12:39:51.411764 | orchestrator | 2025-04-05 12:39:51.411770 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-04-05 12:39:51.411777 | orchestrator | Saturday 05 April 2025 12:38:42 +0000 (0:00:00.056) 
0:03:06.918 ******** 2025-04-05 12:39:51.411783 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:39:51.411789 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:39:51.411795 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:39:51.411802 | orchestrator | 2025-04-05 12:39:51.411808 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-04-05 12:39:51.411814 | orchestrator | Saturday 05 April 2025 12:39:01 +0000 (0:00:18.738) 0:03:25.657 ******** 2025-04-05 12:39:51.411820 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:39:51.411827 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:39:51.411833 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:39:51.411839 | orchestrator | 2025-04-05 12:39:51.411845 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:39:51.411852 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-05 12:39:51.411859 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-04-05 12:39:51.411865 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-04-05 12:39:51.411875 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-05 12:39:51.411881 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-05 12:39:51.411888 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-05 12:39:51.411894 | orchestrator | 2025-04-05 12:39:51.411900 | orchestrator | 2025-04-05 12:39:51.411907 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:39:51.411913 | orchestrator | Saturday 05 April 2025 12:39:50 +0000 (0:00:49.326) 0:04:14.984 ******** 2025-04-05 12:39:51.411919 | orchestrator | =============================================================================== 2025-04-05 12:39:51.411925 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.33s 2025-04-05 12:39:51.411932 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 33.25s 2025-04-05 12:39:51.411938 | orchestrator | neutron : Restart neutron-server container ----------------------------- 18.74s 2025-04-05 12:39:51.411948 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.60s 2025-04-05 12:39:51.411954 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.47s 2025-04-05 12:39:51.411960 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.31s 2025-04-05 12:39:51.411967 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.13s 2025-04-05 12:39:51.411973 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.06s 2025-04-05 12:39:51.411979 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.84s 2025-04-05 12:39:51.411985 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.62s 2025-04-05 12:39:51.411992 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.55s 2025-04-05 12:39:51.411998 | 
orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.19s 2025-04-05 12:39:51.412004 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.92s 2025-04-05 12:39:51.412010 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.84s 2025-04-05 12:39:51.412016 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.72s 2025-04-05 12:39:51.412025 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.51s 2025-04-05 12:39:51.412031 | orchestrator | Setting sysctl values --------------------------------------------------- 3.27s 2025-04-05 12:39:51.412038 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.20s 2025-04-05 12:39:51.412044 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.19s 2025-04-05 12:39:51.412053 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.19s 2025-04-05 12:39:54.432601 | orchestrator | 2025-04-05 12:39:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:54.432779 | orchestrator | 2025-04-05 12:39:54 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:54.433575 | orchestrator | 2025-04-05 12:39:54 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:54.433608 | orchestrator | 2025-04-05 12:39:54 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:54.434293 | orchestrator | 2025-04-05 12:39:54 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:39:54.434378 | orchestrator | 2025-04-05 12:39:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:39:57.463681 | orchestrator | 2025-04-05 12:39:57 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:39:57.464366 | orchestrator | 2025-04-05 12:39:57 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:39:57.466130 | orchestrator | 2025-04-05 12:39:57 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:39:57.467783 | orchestrator | 2025-04-05 12:39:57 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:39:57.468519 | orchestrator | 2025-04-05 12:39:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:00.513475 | orchestrator | 2025-04-05 12:40:00 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:00.514263 | orchestrator | 2025-04-05 12:40:00 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:40:00.515454 | orchestrator | 2025-04-05 12:40:00 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:00.517228 | orchestrator | 2025-04-05 12:40:00 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:00.517432 | orchestrator | 2025-04-05 12:40:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:03.570154 | orchestrator | 2025-04-05 12:40:03 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:03.571308 | orchestrator | 2025-04-05 12:40:03 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:40:03.573208 | orchestrator | 2025-04-05 12:40:03 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c 
is in state STARTED 2025-04-05 12:40:03.575195 | orchestrator | 2025-04-05 12:40:03 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:03.575366 | orchestrator | 2025-04-05 12:40:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:06.623320 | orchestrator | 2025-04-05 12:40:06 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:06.625811 | orchestrator | 2025-04-05 12:40:06 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:40:09.657907 | orchestrator | 2025-04-05 12:40:06 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:09.658094 | orchestrator | 2025-04-05 12:40:06 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:09.658772 | orchestrator | 2025-04-05 12:40:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:09.658813 | orchestrator | 2025-04-05 12:40:09 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:09.658988 | orchestrator | 2025-04-05 12:40:09 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:40:09.659109 | orchestrator | 2025-04-05 12:40:09 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:09.659807 | orchestrator | 2025-04-05 12:40:09 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:12.705590 | orchestrator | 2025-04-05 12:40:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:12.705773 | orchestrator | 2025-04-05 12:40:12 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:12.706657 | orchestrator | 2025-04-05 12:40:12 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state STARTED 2025-04-05 12:40:12.710761 | orchestrator | 2025-04-05 12:40:12 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:12.711965 | orchestrator | 2025-04-05 12:40:12 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:15.761510 | orchestrator | 2025-04-05 12:40:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:15.761638 | orchestrator | 2025-04-05 12:40:15 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:15.763004 | orchestrator | 2025-04-05 12:40:15 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:15.765259 | orchestrator | 2025-04-05 12:40:15 | INFO  | Task 9bf28c50-ca4c-48af-aa5a-59f20cefab54 is in state SUCCESS 2025-04-05 12:40:15.766970 | orchestrator | 2025-04-05 12:40:15.767006 | orchestrator | 2025-04-05 12:40:15.767021 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:40:15.767036 | orchestrator | 2025-04-05 12:40:15.767050 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:40:15.767064 | orchestrator | Saturday 05 April 2025 12:38:28 +0000 (0:00:00.216) 0:00:00.216 ******** 2025-04-05 12:40:15.767078 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:40:15.767094 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:40:15.767107 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:40:15.767121 | orchestrator | 2025-04-05 12:40:15.767136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:40:15.767173 | orchestrator | 
Saturday 05 April 2025 12:38:28 +0000 (0:00:00.349) 0:00:00.565 ******** 2025-04-05 12:40:15.767188 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-04-05 12:40:15.767203 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-04-05 12:40:15.767217 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-04-05 12:40:15.767230 | orchestrator | 2025-04-05 12:40:15.767244 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-04-05 12:40:15.767257 | orchestrator | 2025-04-05 12:40:15.767271 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-05 12:40:15.767285 | orchestrator | Saturday 05 April 2025 12:38:29 +0000 (0:00:00.418) 0:00:00.984 ******** 2025-04-05 12:40:15.767407 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:40:15.767429 | orchestrator | 2025-04-05 12:40:15.767443 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-04-05 12:40:15.767458 | orchestrator | Saturday 05 April 2025 12:38:29 +0000 (0:00:00.552) 0:00:01.536 ******** 2025-04-05 12:40:15.767472 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-04-05 12:40:15.767486 | orchestrator | 2025-04-05 12:40:15.767500 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-04-05 12:40:15.767514 | orchestrator | Saturday 05 April 2025 12:38:32 +0000 (0:00:03.116) 0:00:04.653 ******** 2025-04-05 12:40:15.767529 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-04-05 12:40:15.767857 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-04-05 12:40:15.767881 | orchestrator | 2025-04-05 12:40:15.767899 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-04-05 12:40:15.767915 | orchestrator | Saturday 05 April 2025 12:38:38 +0000 (0:00:05.927) 0:00:10.580 ******** 2025-04-05 12:40:15.767931 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:40:15.767945 | orchestrator | 2025-04-05 12:40:15.767960 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-04-05 12:40:15.767973 | orchestrator | Saturday 05 April 2025 12:38:41 +0000 (0:00:02.944) 0:00:13.525 ******** 2025-04-05 12:40:15.767987 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:40:15.768001 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-04-05 12:40:15.768015 | orchestrator | 2025-04-05 12:40:15.768029 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-04-05 12:40:15.768043 | orchestrator | Saturday 05 April 2025 12:38:45 +0000 (0:00:03.488) 0:00:17.014 ******** 2025-04-05 12:40:15.768057 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:40:15.768071 | orchestrator | 2025-04-05 12:40:15.768085 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-04-05 12:40:15.768098 | orchestrator | Saturday 05 April 2025 12:38:48 +0000 (0:00:02.831) 0:00:19.845 ******** 2025-04-05 12:40:15.768112 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-04-05 12:40:15.768126 
| orchestrator | 2025-04-05 12:40:15.768140 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-04-05 12:40:15.768154 | orchestrator | Saturday 05 April 2025 12:38:51 +0000 (0:00:03.603) 0:00:23.449 ******** 2025-04-05 12:40:15.768168 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.768241 | orchestrator | 2025-04-05 12:40:15.768259 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-04-05 12:40:15.768272 | orchestrator | Saturday 05 April 2025 12:38:54 +0000 (0:00:03.014) 0:00:26.464 ******** 2025-04-05 12:40:15.768286 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.768300 | orchestrator | 2025-04-05 12:40:15.768314 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-04-05 12:40:15.768327 | orchestrator | Saturday 05 April 2025 12:38:58 +0000 (0:00:03.520) 0:00:29.985 ******** 2025-04-05 12:40:15.768355 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.768368 | orchestrator | 2025-04-05 12:40:15.768382 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-04-05 12:40:15.768410 | orchestrator | Saturday 05 April 2025 12:39:01 +0000 (0:00:03.561) 0:00:33.547 ******** 2025-04-05 12:40:15.768442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.768575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.768594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.768648 | orchestrator | 2025-04-05 12:40:15.768663 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-04-05 12:40:15.768678 | orchestrator | Saturday 05 April 2025 12:39:04 +0000 (0:00:02.554) 0:00:36.101 ******** 2025-04-05 12:40:15.768693 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.768708 | orchestrator | 2025-04-05 12:40:15.768742 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-04-05 12:40:15.768757 | orchestrator | Saturday 05 April 2025 12:39:04 +0000 (0:00:00.299) 0:00:36.401 ******** 2025-04-05 12:40:15.768771 
| orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.768785 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:40:15.768799 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:40:15.768813 | orchestrator | 2025-04-05 12:40:15.768827 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-04-05 12:40:15.768840 | orchestrator | Saturday 05 April 2025 12:39:05 +0000 (0:00:01.026) 0:00:37.428 ******** 2025-04-05 12:40:15.768854 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:40:15.768868 | orchestrator | 2025-04-05 12:40:15.768882 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-04-05 12:40:15.768896 | orchestrator | Saturday 05 April 2025 12:39:06 +0000 (0:00:01.090) 0:00:38.519 ******** 2025-04-05 12:40:15.768911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.768970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.768985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769014 | orchestrator | 2025-04-05 12:40:15.769028 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-04-05 12:40:15.769043 | orchestrator | Saturday 05 April 2025 12:39:10 +0000 (0:00:03.789) 0:00:42.308 ******** 2025-04-05 12:40:15.769057 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:40:15.769071 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:40:15.769085 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:40:15.769099 | orchestrator | 2025-04-05 12:40:15.769121 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-05 12:40:15.769135 | orchestrator | Saturday 05 April 2025 12:39:11 +0000 (0:00:00.533) 0:00:42.841 ******** 2025-04-05 12:40:15.769150 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:40:15.769164 | orchestrator | 2025-04-05 12:40:15.769177 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-04-05 12:40:15.769191 | orchestrator | Saturday 05 April 2025 12:39:12 +0000 (0:00:01.716) 
0:00:44.557 ******** 2025-04-05 12:40:15.769206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}}) 2025-04-05 12:40:15.769280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769308 | orchestrator | 2025-04-05 12:40:15.769323 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-04-05 12:40:15.769337 | orchestrator | Saturday 05 April 2025 12:39:15 +0000 (0:00:02.829) 0:00:47.387 ******** 2025-04-05 12:40:15.769357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769387 
| orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.769410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769446 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:40:15.769460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769496 | orchestrator | 
skipping: [testbed-node-2] 2025-04-05 12:40:15.769510 | orchestrator | 2025-04-05 12:40:15.769524 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-04-05 12:40:15.769538 | orchestrator | Saturday 05 April 2025 12:39:17 +0000 (0:00:01.744) 0:00:49.132 ******** 2025-04-05 12:40:15.769553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769589 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.769603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769632 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:40:15.769660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.769676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.769697 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:40:15.769711 | orchestrator | 2025-04-05 12:40:15.769780 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-04-05 12:40:15.769797 | orchestrator | Saturday 05 April 2025 12:39:19 +0000 (0:00:02.478) 0:00:51.610 ******** 2025-04-05 12:40:15.769812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.769916 | orchestrator | 2025-04-05 12:40:15.769930 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-04-05 12:40:15.769944 | orchestrator | Saturday 05 April 2025 12:39:22 +0000 (0:00:02.957) 0:00:54.567 ******** 2025-04-05 12:40:15.769958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.769994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.770062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770111 | orchestrator | 2025-04-05 12:40:15.770125 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-04-05 12:40:15.770139 | orchestrator | Saturday 05 April 2025 12:39:33 +0000 (0:00:10.448) 0:01:05.016 ******** 2025-04-05 12:40:15.770160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.770175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.770205 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:40:15.770219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.770234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.770249 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:40:15.770264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-05 12:40:15.770284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:40:15.770298 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.770319 | orchestrator | 2025-04-05 12:40:15.770333 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-04-05 12:40:15.770348 | orchestrator | Saturday 05 April 2025 12:39:34 +0000 (0:00:01.176) 0:01:06.193 ******** 2025-04-05 12:40:15.770363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.770378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.770393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-05 12:40:15.770407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:40:15.770467 | orchestrator | 2025-04-05 12:40:15.770481 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-05 12:40:15.770495 | orchestrator | Saturday 05 April 2025 12:39:37 +0000 (0:00:02.523) 0:01:08.716 ******** 2025-04-05 12:40:15.770509 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:40:15.770523 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:40:15.770537 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:40:15.770551 | orchestrator | 2025-04-05 12:40:15.770565 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-04-05 12:40:15.770579 | orchestrator | Saturday 05 April 2025 12:39:37 +0000 (0:00:00.200) 0:01:08.917 ******** 2025-04-05 12:40:15.770593 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.770607 | orchestrator | 2025-04-05 12:40:15.770621 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-04-05 12:40:15.770641 | orchestrator | Saturday 
05 April 2025 12:39:39 +0000 (0:00:02.100) 0:01:11.017 ******** 2025-04-05 12:40:15.770655 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.770669 | orchestrator | 2025-04-05 12:40:15.770683 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-04-05 12:40:15.770697 | orchestrator | Saturday 05 April 2025 12:39:41 +0000 (0:00:02.236) 0:01:13.254 ******** 2025-04-05 12:40:15.770712 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.770744 | orchestrator | 2025-04-05 12:40:15.770758 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-05 12:40:15.770772 | orchestrator | Saturday 05 April 2025 12:39:53 +0000 (0:00:11.567) 0:01:24.821 ******** 2025-04-05 12:40:15.770786 | orchestrator | 2025-04-05 12:40:15.770800 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-05 12:40:15.770814 | orchestrator | Saturday 05 April 2025 12:39:53 +0000 (0:00:00.056) 0:01:24.877 ******** 2025-04-05 12:40:15.770829 | orchestrator | 2025-04-05 12:40:15.770843 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-05 12:40:15.770857 | orchestrator | Saturday 05 April 2025 12:39:53 +0000 (0:00:00.053) 0:01:24.931 ******** 2025-04-05 12:40:15.770871 | orchestrator | 2025-04-05 12:40:15.770885 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-04-05 12:40:15.770899 | orchestrator | Saturday 05 April 2025 12:39:53 +0000 (0:00:00.191) 0:01:25.122 ******** 2025-04-05 12:40:15.770913 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.770927 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:40:15.770941 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:40:15.770955 | orchestrator | 2025-04-05 12:40:15.770969 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-04-05 12:40:15.770983 | orchestrator | Saturday 05 April 2025 12:40:04 +0000 (0:00:10.653) 0:01:35.776 ******** 2025-04-05 12:40:15.771005 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:40:15.771019 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:40:15.771033 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:40:15.771047 | orchestrator | 2025-04-05 12:40:15.771061 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:40:15.771076 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-05 12:40:15.771091 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:40:15.771105 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-05 12:40:15.771119 | orchestrator | 2025-04-05 12:40:15.771133 | orchestrator | 2025-04-05 12:40:15.771147 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:40:15.771161 | orchestrator | Saturday 05 April 2025 12:40:14 +0000 (0:00:10.178) 0:01:45.955 ******** 2025-04-05 12:40:15.771174 | orchestrator | =============================================================================== 2025-04-05 12:40:15.771188 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 11.57s 2025-04-05 12:40:15.771202 | orchestrator | magnum : Restart 
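
As an aside, the magnum container definitions dumped earlier in this play wire each service to a kolla healthcheck ('healthcheck_curl http://192.168.16.10:9511' for magnum_api, 'healthcheck_port magnum-conductor 5672' for magnum_conductor). The real helpers are scripts shipped inside the kolla images and are not shown in this log; the following is only a minimal Python sketch of the behaviour they expose to Docker, assuming nothing more than "exit 0 means healthy". The healthcheck_port stand-in simply probes TCP reachability, which is a simplification of the real check.

import socket
import sys
import urllib.error
import urllib.request

def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
    """Return 0 if the HTTP endpoint responds at all, 1 on connection failure."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return 0
    except urllib.error.HTTPError:
        return 0  # the server answered, even if with an error status
    except Exception:
        return 1

def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> int:
    """Return 0 if a TCP connection to host:port succeeds, 1 otherwise.

    Simplification: the real kolla helper checks that a named process holds a
    connection to the port; here we only test that the port is reachable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0
    except OSError:
        return 1

if __name__ == "__main__":
    # Endpoint taken from the logged magnum-api healthcheck on testbed-node-0.
    sys.exit(healthcheck_curl("http://192.168.16.10:9511"))
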
magnum-api container ---------------------------------- 10.65s 2025-04-05 12:40:15.771216 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.45s 2025-04-05 12:40:15.771230 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.18s 2025-04-05 12:40:15.771249 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.93s 2025-04-05 12:40:18.805256 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.79s 2025-04-05 12:40:18.805356 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.60s 2025-04-05 12:40:18.805373 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.56s 2025-04-05 12:40:18.805388 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.52s 2025-04-05 12:40:18.805403 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.49s 2025-04-05 12:40:18.805417 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.12s 2025-04-05 12:40:18.805431 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.01s 2025-04-05 12:40:18.805445 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.96s 2025-04-05 12:40:18.805459 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.94s 2025-04-05 12:40:18.805473 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.83s 2025-04-05 12:40:18.805488 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.83s 2025-04-05 12:40:18.805502 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.55s 2025-04-05 12:40:18.805516 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.52s 2025-04-05 12:40:18.805530 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.48s 2025-04-05 12:40:18.805544 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.24s 2025-04-05 12:40:18.805558 | orchestrator | 2025-04-05 12:40:15 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:18.805573 | orchestrator | 2025-04-05 12:40:15 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:18.805587 | orchestrator | 2025-04-05 12:40:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:18.805618 | orchestrator | 2025-04-05 12:40:18 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:18.806717 | orchestrator | 2025-04-05 12:40:18 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:18.807395 | orchestrator | 2025-04-05 12:40:18 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:18.809399 | orchestrator | 2025-04-05 12:40:18 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:21.863990 | orchestrator | 2025-04-05 12:40:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:21.864124 | orchestrator | 2025-04-05 12:40:21 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:21.865221 | orchestrator | 2025-04-05 12:40:21 | INFO  | Task 
a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:21.867231 | orchestrator | 2025-04-05 12:40:21 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:21.868718 | orchestrator | 2025-04-05 12:40:21 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state STARTED 2025-04-05 12:40:24.908478 | orchestrator | 2025-04-05 12:40:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:24.908594 | orchestrator | 2025-04-05 12:40:24 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:24.908971 | orchestrator | 2025-04-05 12:40:24 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:24.909761 | orchestrator | 2025-04-05 12:40:24 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:24.911996 | orchestrator | 2025-04-05 12:40:24 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:24.914241 | orchestrator | 2025-04-05 12:40:24 | INFO  | Task 1828efe6-0f3f-4687-920a-0811f24bb6ae is in state SUCCESS 2025-04-05 12:40:27.949909 | orchestrator | 2025-04-05 12:40:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:27.950098 | orchestrator | 2025-04-05 12:40:27 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:27.950537 | orchestrator | 2025-04-05 12:40:27 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:27.950974 | orchestrator | 2025-04-05 12:40:27 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:27.951931 | orchestrator | 2025-04-05 12:40:27 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:30.993889 | orchestrator | 2025-04-05 12:40:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:30.994067 | orchestrator | 2025-04-05 12:40:30 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:30.995899 | orchestrator | 2025-04-05 12:40:30 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:30.995932 | orchestrator | 2025-04-05 12:40:30 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:30.996617 | orchestrator | 2025-04-05 12:40:30 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:30.996837 | orchestrator | 2025-04-05 12:40:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:34.036077 | orchestrator | 2025-04-05 12:40:34 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:34.036318 | orchestrator | 2025-04-05 12:40:34 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:34.036358 | orchestrator | 2025-04-05 12:40:34 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:34.037214 | orchestrator | 2025-04-05 12:40:34 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:37.100366 | orchestrator | 2025-04-05 12:40:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:37.100497 | orchestrator | 2025-04-05 12:40:37 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:37.102301 | orchestrator | 2025-04-05 12:40:37 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:37.103752 | orchestrator | 2025-04-05 12:40:37 | INFO  | Task 
967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:37.105860 | orchestrator | 2025-04-05 12:40:37 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:37.105997 | orchestrator | 2025-04-05 12:40:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:40.155357 | orchestrator | 2025-04-05 12:40:40 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:40.155919 | orchestrator | 2025-04-05 12:40:40 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:40.157160 | orchestrator | 2025-04-05 12:40:40 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:40.161830 | orchestrator | 2025-04-05 12:40:40 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:43.199654 | orchestrator | 2025-04-05 12:40:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:43.199842 | orchestrator | 2025-04-05 12:40:43 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:43.200424 | orchestrator | 2025-04-05 12:40:43 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:43.200475 | orchestrator | 2025-04-05 12:40:43 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:43.201236 | orchestrator | 2025-04-05 12:40:43 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:46.231202 | orchestrator | 2025-04-05 12:40:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:46.231335 | orchestrator | 2025-04-05 12:40:46 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:49.266814 | orchestrator | 2025-04-05 12:40:46 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:49.266930 | orchestrator | 2025-04-05 12:40:46 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:49.266948 | orchestrator | 2025-04-05 12:40:46 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:49.266964 | orchestrator | 2025-04-05 12:40:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:49.266995 | orchestrator | 2025-04-05 12:40:49 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:49.267570 | orchestrator | 2025-04-05 12:40:49 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:49.268319 | orchestrator | 2025-04-05 12:40:49 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:49.268350 | orchestrator | 2025-04-05 12:40:49 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:52.303460 | orchestrator | 2025-04-05 12:40:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:52.303585 | orchestrator | 2025-04-05 12:40:52 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:52.304621 | orchestrator | 2025-04-05 12:40:52 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:52.305654 | orchestrator | 2025-04-05 12:40:52 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:52.306990 | orchestrator | 2025-04-05 12:40:52 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:55.340038 | orchestrator | 2025-04-05 12:40:52 | INFO  | Wait 1 
second(s) until the next check 2025-04-05 12:40:55.340246 | orchestrator | 2025-04-05 12:40:55 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:40:55.342552 | orchestrator | 2025-04-05 12:40:55 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:40:55.342587 | orchestrator | 2025-04-05 12:40:55 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:40:55.343063 | orchestrator | 2025-04-05 12:40:55 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:40:58.374288 | orchestrator | 2025-04-05 12:40:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:40:58.374413 | orchestrator | 2025-04-05 12:40:58 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:01.419039 | orchestrator | 2025-04-05 12:40:58 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:01.419159 | orchestrator | 2025-04-05 12:40:58 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:01.419178 | orchestrator | 2025-04-05 12:40:58 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:01.419193 | orchestrator | 2025-04-05 12:40:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:01.419226 | orchestrator | 2025-04-05 12:41:01 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:01.421979 | orchestrator | 2025-04-05 12:41:01 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:01.422446 | orchestrator | 2025-04-05 12:41:01 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:01.423805 | orchestrator | 2025-04-05 12:41:01 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:04.467703 | orchestrator | 2025-04-05 12:41:01 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:04.467894 | orchestrator | 2025-04-05 12:41:04 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:04.468271 | orchestrator | 2025-04-05 12:41:04 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:04.468308 | orchestrator | 2025-04-05 12:41:04 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:04.468963 | orchestrator | 2025-04-05 12:41:04 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:07.526870 | orchestrator | 2025-04-05 12:41:04 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:07.527002 | orchestrator | 2025-04-05 12:41:07 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:07.529637 | orchestrator | 2025-04-05 12:41:07 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:07.533844 | orchestrator | 2025-04-05 12:41:07 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:07.535116 | orchestrator | 2025-04-05 12:41:07 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:10.572493 | orchestrator | 2025-04-05 12:41:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:10.572623 | orchestrator | 2025-04-05 12:41:10 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:10.573197 | orchestrator | 2025-04-05 12:41:10 | INFO  | Task 
a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:10.573228 | orchestrator | 2025-04-05 12:41:10 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:10.573254 | orchestrator | 2025-04-05 12:41:10 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:13.608236 | orchestrator | 2025-04-05 12:41:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:13.608361 | orchestrator | 2025-04-05 12:41:13 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:13.608605 | orchestrator | 2025-04-05 12:41:13 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:13.608638 | orchestrator | 2025-04-05 12:41:13 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:13.609080 | orchestrator | 2025-04-05 12:41:13 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:16.638015 | orchestrator | 2025-04-05 12:41:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:16.638195 | orchestrator | 2025-04-05 12:41:16 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:16.638533 | orchestrator | 2025-04-05 12:41:16 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:16.638569 | orchestrator | 2025-04-05 12:41:16 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:16.639490 | orchestrator | 2025-04-05 12:41:16 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:19.666835 | orchestrator | 2025-04-05 12:41:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:19.666962 | orchestrator | 2025-04-05 12:41:19 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:19.667226 | orchestrator | 2025-04-05 12:41:19 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:19.667258 | orchestrator | 2025-04-05 12:41:19 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:19.667655 | orchestrator | 2025-04-05 12:41:19 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:22.688781 | orchestrator | 2025-04-05 12:41:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:22.688919 | orchestrator | 2025-04-05 12:41:22 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:22.691152 | orchestrator | 2025-04-05 12:41:22 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:22.691572 | orchestrator | 2025-04-05 12:41:22 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:22.692103 | orchestrator | 2025-04-05 12:41:22 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:25.728119 | orchestrator | 2025-04-05 12:41:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:25.728250 | orchestrator | 2025-04-05 12:41:25 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:25.729323 | orchestrator | 2025-04-05 12:41:25 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:25.729359 | orchestrator | 2025-04-05 12:41:25 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:25.731368 | orchestrator | 2025-04-05 12:41:25 | INFO  | Task 
4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:28.763723 | orchestrator | 2025-04-05 12:41:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:28.763922 | orchestrator | 2025-04-05 12:41:28 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:28.765001 | orchestrator | 2025-04-05 12:41:28 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:28.765038 | orchestrator | 2025-04-05 12:41:28 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:28.766067 | orchestrator | 2025-04-05 12:41:28 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:28.766187 | orchestrator | 2025-04-05 12:41:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:31.801071 | orchestrator | 2025-04-05 12:41:31 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:31.801633 | orchestrator | 2025-04-05 12:41:31 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:31.801670 | orchestrator | 2025-04-05 12:41:31 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:31.802179 | orchestrator | 2025-04-05 12:41:31 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:31.802269 | orchestrator | 2025-04-05 12:41:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:34.820962 | orchestrator | 2025-04-05 12:41:34 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state STARTED 2025-04-05 12:41:34.822568 | orchestrator | 2025-04-05 12:41:34 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:34.822616 | orchestrator | 2025-04-05 12:41:34 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:34.827075 | orchestrator | 2025-04-05 12:41:34 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:37.858550 | orchestrator | 2025-04-05 12:41:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:37.858774 | orchestrator | 2025-04-05 12:41:37 | INFO  | Task ff3d1282-872b-4ce4-b053-6b9a3f7add0b is in state SUCCESS 2025-04-05 12:41:37.858794 | orchestrator | 2025-04-05 12:41:37.858808 | orchestrator | 2025-04-05 12:41:37.858837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:41:37.858851 | orchestrator | 2025-04-05 12:41:37.858863 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:41:37.858876 | orchestrator | Saturday 05 April 2025 12:39:54 +0000 (0:00:00.398) 0:00:00.398 ******** 2025-04-05 12:41:37.858889 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:41:37.858902 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:41:37.858915 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:41:37.858927 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:41:37.858940 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:41:37.858952 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:41:37.858965 | orchestrator | ok: [testbed-manager] 2025-04-05 12:41:37.858977 | orchestrator | 2025-04-05 12:41:37.858989 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:41:37.859002 | orchestrator | Saturday 05 April 2025 12:39:55 +0000 (0:00:01.062) 0:00:01.460 ******** 2025-04-05 12:41:37.859014 | 
orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859027 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859039 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859052 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859064 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859076 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859109 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-04-05 12:41:37.859122 | orchestrator | 2025-04-05 12:41:37.859135 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-05 12:41:37.859147 | orchestrator | 2025-04-05 12:41:37.859159 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-04-05 12:41:37.859298 | orchestrator | Saturday 05 April 2025 12:39:56 +0000 (0:00:00.974) 0:00:02.435 ******** 2025-04-05 12:41:37.859316 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-04-05 12:41:37.859331 | orchestrator | 2025-04-05 12:41:37.859343 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-04-05 12:41:37.859356 | orchestrator | Saturday 05 April 2025 12:39:57 +0000 (0:00:01.482) 0:00:03.917 ******** 2025-04-05 12:41:37.859368 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-04-05 12:41:37.859380 | orchestrator | 2025-04-05 12:41:37.859393 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-04-05 12:41:37.859405 | orchestrator | Saturday 05 April 2025 12:40:00 +0000 (0:00:03.041) 0:00:06.959 ******** 2025-04-05 12:41:37.859419 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-04-05 12:41:37.859439 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-04-05 12:41:37.859452 | orchestrator | 2025-04-05 12:41:37.859465 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-04-05 12:41:37.859477 | orchestrator | Saturday 05 April 2025 12:40:06 +0000 (0:00:05.992) 0:00:12.951 ******** 2025-04-05 12:41:37.859490 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:41:37.859503 | orchestrator | 2025-04-05 12:41:37.859515 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-04-05 12:41:37.859527 | orchestrator | Saturday 05 April 2025 12:40:09 +0000 (0:00:02.776) 0:00:15.728 ******** 2025-04-05 12:41:37.859540 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:41:37.859552 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-04-05 12:41:37.859564 | orchestrator | 2025-04-05 12:41:37.859577 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-04-05 12:41:37.859589 | orchestrator | Saturday 05 April 2025 12:40:12 +0000 (0:00:03.350) 0:00:19.079 ******** 2025-04-05 12:41:37.859601 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 
12:41:37.859614 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-04-05 12:41:37.859627 | orchestrator | 2025-04-05 12:41:37.859640 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-04-05 12:41:37.859652 | orchestrator | Saturday 05 April 2025 12:40:18 +0000 (0:00:05.436) 0:00:24.515 ******** 2025-04-05 12:41:37.859664 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-04-05 12:41:37.859677 | orchestrator | 2025-04-05 12:41:37.859689 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:41:37.859701 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859714 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859727 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859766 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859780 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859813 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859827 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.859840 | orchestrator | 2025-04-05 12:41:37.859853 | orchestrator | 2025-04-05 12:41:37.859865 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:41:37.859878 | orchestrator | Saturday 05 April 2025 12:40:22 +0000 (0:00:03.906) 0:00:28.421 ******** 2025-04-05 12:41:37.859890 | orchestrator | =============================================================================== 2025-04-05 12:41:37.859902 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.99s 2025-04-05 12:41:37.859914 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.44s 2025-04-05 12:41:37.859932 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 3.91s 2025-04-05 12:41:37.859947 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.35s 2025-04-05 12:41:37.859960 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.04s 2025-04-05 12:41:37.859974 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.78s 2025-04-05 12:41:37.859988 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.48s 2025-04-05 12:41:37.860001 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.06s 2025-04-05 12:41:37.860015 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-04-05 12:41:37.860029 | orchestrator | 2025-04-05 12:41:37.860042 | orchestrator | 2025-04-05 12:41:37.860056 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-04-05 12:41:37.860070 | orchestrator | 2025-04-05 12:41:37.860083 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-04-05 12:41:37.860097 | orchestrator | 
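
The service-ks-register tasks in the ceph-rgw play above register Swift-compatible object storage in Keystone: a swift (object-store) service, internal and public endpoints under api-int.testbed.osism.xyz:6780 and api.testbed.osism.xyz:6780, a ceph_rgw user in the service project, and the admin/ResellerAdmin roles. A rough openstacksdk equivalent is sketched below; it is not the kolla-ansible role itself, and the cloud name "testbed", the region "RegionOne", and the placeholder password are assumptions made only for the illustration.

import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# "ceph-rgw | Creating services": swift (object-store)
service = conn.identity.create_service(name="swift", type="object-store")

# "ceph-rgw | Creating endpoints": internal and public Swift endpoints
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
    ("public", "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumed region
    )

# "ceph-rgw | Creating users" and "Granting user roles": ceph_rgw -> service -> admin
project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="ceph_rgw",
    password="CHANGE_ME",  # placeholder, the real password is generated elsewhere
    default_project_id=project.id,
)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)

The logged role additionally creates the ResellerAdmin role, which would be one more find_role/create_role call in the same style.
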
Saturday 05 April 2025 12:35:36 +0000 (0:00:00.144) 0:00:00.144 ******** 2025-04-05 12:41:37.860112 | orchestrator | changed: [localhost] 2025-04-05 12:41:37.860126 | orchestrator | 2025-04-05 12:41:37.860140 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-04-05 12:41:37.860153 | orchestrator | Saturday 05 April 2025 12:35:36 +0000 (0:00:00.601) 0:00:00.745 ******** 2025-04-05 12:41:37.860167 | orchestrator | 2025-04-05 12:41:37.860181 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860195 | orchestrator | 2025-04-05 12:41:37.860208 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860222 | orchestrator | 2025-04-05 12:41:37.860236 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860250 | orchestrator | 2025-04-05 12:41:37.860264 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860278 | orchestrator | 2025-04-05 12:41:37.860291 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860304 | orchestrator | 2025-04-05 12:41:37.860316 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860328 | orchestrator | 2025-04-05 12:41:37.860340 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-05 12:41:37.860353 | orchestrator | changed: [localhost] 2025-04-05 12:41:37.860365 | orchestrator | 2025-04-05 12:41:37.860377 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-04-05 12:41:37.860390 | orchestrator | Saturday 05 April 2025 12:41:21 +0000 (0:05:45.054) 0:05:45.799 ******** 2025-04-05 12:41:37.860402 | orchestrator | changed: [localhost] 2025-04-05 12:41:37.860414 | orchestrator | 2025-04-05 12:41:37.860427 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:41:37.860439 | orchestrator | 2025-04-05 12:41:37.860451 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:41:37.860471 | orchestrator | Saturday 05 April 2025 12:41:34 +0000 (0:00:12.821) 0:05:58.621 ******** 2025-04-05 12:41:37.860483 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:41:37.860495 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:41:37.860508 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:41:37.860520 | orchestrator | 2025-04-05 12:41:37.860533 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:41:37.860545 | orchestrator | Saturday 05 April 2025 12:41:35 +0000 (0:00:00.855) 0:05:59.476 ******** 2025-04-05 12:41:37.860557 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-04-05 12:41:37.860570 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-04-05 12:41:37.860582 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-04-05 12:41:37.860594 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-04-05 12:41:37.860607 | orchestrator | 2025-04-05 12:41:37.860619 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-04-05 
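
The "Download ironic ipa images" play above spends almost all of its runtime (345.05s, with periodic STILL ALIVE keepalives) fetching the ironic-agent initramfs, followed by the kernel. A minimal Python sketch of an equivalent download is shown below; the destination directory and image URLs are placeholders for illustration, since the playbook's actual values are not printed in this log.

import pathlib
import shutil
import urllib.request

DEST = pathlib.Path("/tmp/ironic-agent")  # assumed destination directory
IMAGES = {
    "ironic-agent.initramfs": "https://example.org/ipa/ironic-agent.initramfs",  # placeholder URL
    "ironic-agent.kernel": "https://example.org/ipa/ironic-agent.kernel",        # placeholder URL
}

# "Ensure the destination directory exists"
DEST.mkdir(parents=True, exist_ok=True)

# Stream each image to disk, similar in effect to an Ansible get_url task.
for filename, url in IMAGES.items():
    with urllib.request.urlopen(url, timeout=600) as response, \
         open(DEST / filename, "wb") as target:
        shutil.copyfileobj(response, target)
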
12:41:37.860631 | orchestrator | skipping: no hosts matched 2025-04-05 12:41:37.860649 | orchestrator | 2025-04-05 12:41:37.860661 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:41:37.860674 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.860687 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.860699 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.860712 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:41:37.860725 | orchestrator | 2025-04-05 12:41:37.860754 | orchestrator | 2025-04-05 12:41:37.860774 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:41:40.881967 | orchestrator | Saturday 05 April 2025 12:41:37 +0000 (0:00:01.578) 0:06:01.054 ******** 2025-04-05 12:41:40.882231 | orchestrator | =============================================================================== 2025-04-05 12:41:40.882271 | orchestrator | Download ironic-agent initramfs --------------------------------------- 345.05s 2025-04-05 12:41:40.882287 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.82s 2025-04-05 12:41:40.882302 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.58s 2025-04-05 12:41:40.882316 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2025-04-05 12:41:40.882330 | orchestrator | Ensure the destination directory exists --------------------------------- 0.60s 2025-04-05 12:41:40.882345 | orchestrator | 2025-04-05 12:41:37 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:40.882360 | orchestrator | 2025-04-05 12:41:37 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:40.882374 | orchestrator | 2025-04-05 12:41:37 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:40.882388 | orchestrator | 2025-04-05 12:41:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:40.882420 | orchestrator | 2025-04-05 12:41:40 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:40.882639 | orchestrator | 2025-04-05 12:41:40 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:40.882672 | orchestrator | 2025-04-05 12:41:40 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:40.883385 | orchestrator | 2025-04-05 12:41:40 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:43.906656 | orchestrator | 2025-04-05 12:41:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:43.906912 | orchestrator | 2025-04-05 12:41:43 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:43.910104 | orchestrator | 2025-04-05 12:41:43 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:43.910134 | orchestrator | 2025-04-05 12:41:43 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:43.915994 | orchestrator | 2025-04-05 12:41:43 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 
12:41:43.917676 | orchestrator | 2025-04-05 12:41:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:46.940680 | orchestrator | 2025-04-05 12:41:46 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:46.941045 | orchestrator | 2025-04-05 12:41:46 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:46.942835 | orchestrator | 2025-04-05 12:41:46 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:46.943246 | orchestrator | 2025-04-05 12:41:46 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:46.943363 | orchestrator | 2025-04-05 12:41:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:49.963868 | orchestrator | 2025-04-05 12:41:49 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:49.964093 | orchestrator | 2025-04-05 12:41:49 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:49.964611 | orchestrator | 2025-04-05 12:41:49 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:49.965375 | orchestrator | 2025-04-05 12:41:49 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:52.991017 | orchestrator | 2025-04-05 12:41:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:52.991151 | orchestrator | 2025-04-05 12:41:52 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:52.992535 | orchestrator | 2025-04-05 12:41:52 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:52.992588 | orchestrator | 2025-04-05 12:41:52 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:52.994436 | orchestrator | 2025-04-05 12:41:52 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:56.055913 | orchestrator | 2025-04-05 12:41:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:56.056051 | orchestrator | 2025-04-05 12:41:56 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:59.094682 | orchestrator | 2025-04-05 12:41:56 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:59.094857 | orchestrator | 2025-04-05 12:41:56 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:59.094878 | orchestrator | 2025-04-05 12:41:56 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:41:59.094895 | orchestrator | 2025-04-05 12:41:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:41:59.094929 | orchestrator | 2025-04-05 12:41:59 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:41:59.096846 | orchestrator | 2025-04-05 12:41:59 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:41:59.097132 | orchestrator | 2025-04-05 12:41:59 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:41:59.098175 | orchestrator | 2025-04-05 12:41:59 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:02.125025 | orchestrator | 2025-04-05 12:41:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:02.125177 | orchestrator | 2025-04-05 12:42:02 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:02.125704 | 
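
The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the OSISM side polling its asynchronous tasks by UUID until each one reports SUCCESS (or failure). A generic poller that produces this kind of output might look like the sketch below; get_task_state is a hypothetical callable standing in for the real client, which is not visible in this log.

import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll task states until every task leaves STARTED, logging as above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical lookup: UUID -> state string
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

For example, wait_for_tasks(["a564ad1b-cf82-41bb-b046-2327c1911202", "967bc9db-064b-4e35-a7da-13f667500d3c"], get_task_state) would emit the same shape of output seen in this console, with the actual cadence depending on how long each state lookup takes.
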
orchestrator | 2025-04-05 12:42:02 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:02.125768 | orchestrator | 2025-04-05 12:42:02 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:02.127339 | orchestrator | 2025-04-05 12:42:02 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:05.151312 | orchestrator | 2025-04-05 12:42:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:05.151445 | orchestrator | 2025-04-05 12:42:05 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:05.151896 | orchestrator | 2025-04-05 12:42:05 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:05.151930 | orchestrator | 2025-04-05 12:42:05 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:05.152219 | orchestrator | 2025-04-05 12:42:05 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:08.179363 | orchestrator | 2025-04-05 12:42:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:08.179509 | orchestrator | 2025-04-05 12:42:08 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:08.185107 | orchestrator | 2025-04-05 12:42:08 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:08.185619 | orchestrator | 2025-04-05 12:42:08 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:08.185667 | orchestrator | 2025-04-05 12:42:08 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:11.230300 | orchestrator | 2025-04-05 12:42:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:11.230440 | orchestrator | 2025-04-05 12:42:11 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:11.231417 | orchestrator | 2025-04-05 12:42:11 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:11.235263 | orchestrator | 2025-04-05 12:42:11 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:11.236972 | orchestrator | 2025-04-05 12:42:11 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:14.266739 | orchestrator | 2025-04-05 12:42:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:14.266909 | orchestrator | 2025-04-05 12:42:14 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:14.269347 | orchestrator | 2025-04-05 12:42:14 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:14.271199 | orchestrator | 2025-04-05 12:42:14 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:14.273290 | orchestrator | 2025-04-05 12:42:14 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:14.273836 | orchestrator | 2025-04-05 12:42:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:17.316052 | orchestrator | 2025-04-05 12:42:17 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:17.317111 | orchestrator | 2025-04-05 12:42:17 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:17.318943 | orchestrator | 2025-04-05 12:42:17 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:17.320492 | 
orchestrator | 2025-04-05 12:42:17 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:17.320877 | orchestrator | 2025-04-05 12:42:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:20.368450 | orchestrator | 2025-04-05 12:42:20 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:20.368896 | orchestrator | 2025-04-05 12:42:20 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:20.370356 | orchestrator | 2025-04-05 12:42:20 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:20.370951 | orchestrator | 2025-04-05 12:42:20 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:20.371090 | orchestrator | 2025-04-05 12:42:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:23.415429 | orchestrator | 2025-04-05 12:42:23 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:23.416176 | orchestrator | 2025-04-05 12:42:23 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:23.418134 | orchestrator | 2025-04-05 12:42:23 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:26.458716 | orchestrator | 2025-04-05 12:42:23 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:26.458884 | orchestrator | 2025-04-05 12:42:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:26.458921 | orchestrator | 2025-04-05 12:42:26 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:26.461302 | orchestrator | 2025-04-05 12:42:26 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:29.500017 | orchestrator | 2025-04-05 12:42:26 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:29.500113 | orchestrator | 2025-04-05 12:42:26 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:29.500130 | orchestrator | 2025-04-05 12:42:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:29.500161 | orchestrator | 2025-04-05 12:42:29 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:29.502191 | orchestrator | 2025-04-05 12:42:29 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:29.502837 | orchestrator | 2025-04-05 12:42:29 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:29.505904 | orchestrator | 2025-04-05 12:42:29 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:32.545378 | orchestrator | 2025-04-05 12:42:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:32.545508 | orchestrator | 2025-04-05 12:42:32 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:32.547317 | orchestrator | 2025-04-05 12:42:32 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:32.549222 | orchestrator | 2025-04-05 12:42:32 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:32.550982 | orchestrator | 2025-04-05 12:42:32 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:32.551358 | orchestrator | 2025-04-05 12:42:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:35.601184 | orchestrator | 2025-04-05 
12:42:35 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:35.601541 | orchestrator | 2025-04-05 12:42:35 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:35.602725 | orchestrator | 2025-04-05 12:42:35 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:35.604075 | orchestrator | 2025-04-05 12:42:35 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:35.604301 | orchestrator | 2025-04-05 12:42:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:38.650821 | orchestrator | 2025-04-05 12:42:38 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:38.652596 | orchestrator | 2025-04-05 12:42:38 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:38.653884 | orchestrator | 2025-04-05 12:42:38 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:38.655493 | orchestrator | 2025-04-05 12:42:38 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:38.655866 | orchestrator | 2025-04-05 12:42:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:41.705553 | orchestrator | 2025-04-05 12:42:41 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:41.706128 | orchestrator | 2025-04-05 12:42:41 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:41.706929 | orchestrator | 2025-04-05 12:42:41 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:41.710073 | orchestrator | 2025-04-05 12:42:41 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:44.762389 | orchestrator | 2025-04-05 12:42:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:44.762530 | orchestrator | 2025-04-05 12:42:44 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:44.763012 | orchestrator | 2025-04-05 12:42:44 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:44.764340 | orchestrator | 2025-04-05 12:42:44 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:44.765158 | orchestrator | 2025-04-05 12:42:44 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:44.765619 | orchestrator | 2025-04-05 12:42:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:47.811606 | orchestrator | 2025-04-05 12:42:47 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:47.814489 | orchestrator | 2025-04-05 12:42:47 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:47.815260 | orchestrator | 2025-04-05 12:42:47 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:47.815307 | orchestrator | 2025-04-05 12:42:47 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:50.850937 | orchestrator | 2025-04-05 12:42:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:50.851153 | orchestrator | 2025-04-05 12:42:50 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:50.851788 | orchestrator | 2025-04-05 12:42:50 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED 2025-04-05 12:42:50.851824 | orchestrator | 2025-04-05 
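(Editorial note: the condensed block above is the output of a task wait loop — the orchestrator repeatedly queries the state of the four queued tasks and sleeps between rounds until each reaches a terminal state such as SUCCESS. A minimal Python sketch of that pattern follows; get_task_state() and the one-second interval are illustrative assumptions, not the actual OSISM client code. The final check rounds, ending with the first SUCCESS, continue after the sketch.)

import logging
import time

# Task states treated as "finished" in this sketch (assumed, Celery-style names).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll every pending task, log its state, and sleep between rounds."""
    logger = logging.getLogger("wait")
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # placeholder for the real client call
            logger.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            logger.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)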
2025-04-05 12:42:50.851824 | orchestrator | 2025-04-05 12:42:50 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED
2025-04-05 12:42:50.852346 | orchestrator | 2025-04-05 12:42:50 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED
2025-04-05 12:42:53.883323 | orchestrator | 2025-04-05 12:42:50 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:42:53.883459 | orchestrator | 2025-04-05 12:42:53 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED
2025-04-05 12:42:53.883738 | orchestrator | 2025-04-05 12:42:53 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state STARTED
2025-04-05 12:42:53.885995 | orchestrator | 2025-04-05 12:42:53 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED
2025-04-05 12:42:53.888988 | orchestrator | 2025-04-05 12:42:53 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED
2025-04-05 12:42:53.889174 | orchestrator | 2025-04-05 12:42:53 | INFO  | Wait 1 second(s) until the next check
2025-04-05 12:42:56.931882 | orchestrator | 2025-04-05 12:42:56 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED
2025-04-05 12:42:56.935387 | orchestrator | 2025-04-05 12:42:56 | INFO  | Task 967bc9db-064b-4e35-a7da-13f667500d3c is in state SUCCESS
2025-04-05 12:42:56.937042 | orchestrator |
2025-04-05 12:42:56.937084 | orchestrator |
2025-04-05 12:42:56.937100 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-05 12:42:56.937115 | orchestrator |
2025-04-05 12:42:56.937129 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-05 12:42:56.937144 | orchestrator | Saturday 05 April 2025 12:38:37 +0000 (0:00:00.226) 0:00:00.226 ********
2025-04-05 12:42:56.937158 | orchestrator | ok: [testbed-manager]
2025-04-05 12:42:56.937173 | orchestrator | ok: [testbed-node-0]
2025-04-05 12:42:56.937187 | orchestrator | ok: [testbed-node-1]
2025-04-05 12:42:56.937202 | orchestrator | ok: [testbed-node-2]
2025-04-05 12:42:56.937216 | orchestrator | ok: [testbed-node-3]
2025-04-05 12:42:56.937230 | orchestrator | ok: [testbed-node-4]
2025-04-05 12:42:56.937244 | orchestrator | ok: [testbed-node-5]
2025-04-05 12:42:56.937258 | orchestrator |
2025-04-05 12:42:56.937273 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-05 12:42:56.937287 | orchestrator | Saturday 05 April 2025 12:38:37 +0000 (0:00:00.827) 0:00:01.053 ********
2025-04-05 12:42:56.937302 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937317 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937331 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937345 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937360 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937721 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937738 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-04-05 12:42:56.937752 | orchestrator |
2025-04-05 12:42:56.937802 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-04-05 12:42:56.937818 | orchestrator |
2025-04-05 12:42:56.937832 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-04-05 12:42:56.937846 |
orchestrator | Saturday 05 April 2025 12:38:38 +0000 (0:00:00.934) 0:00:01.988 ******** 2025-04-05 12:42:56.937862 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:42:56.937878 | orchestrator | 2025-04-05 12:42:56.937892 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-04-05 12:42:56.937906 | orchestrator | Saturday 05 April 2025 12:38:40 +0000 (0:00:01.401) 0:00:03.390 ******** 2025-04-05 12:42:56.937923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.937968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.938079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.938113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.938129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:42:56.938267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.938289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.938318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.938334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.938351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.938399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.938417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.938434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.938947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.939196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.939216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.939233 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.939281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.939307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.939341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.939368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.939382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.939397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.939412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.939427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.939835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.939886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.939914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.939930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.939946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.940048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.940070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:42:56.940192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.940207 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.940267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.940290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.940306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.940391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.940413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.940427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.940440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.940807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.940972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.940985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.940998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.941011 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.941290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.941320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.941333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.941388 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.941401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.941477 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.941860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.941891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.941914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.941925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.942269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.942281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.942303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.942313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.942332 | orchestrator | 2025-04-05 12:42:56.942343 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-04-05 12:42:56.942353 | orchestrator | Saturday 05 April 2025 12:38:43 +0000 (0:00:03.717) 0:00:07.108 ******** 2025-04-05 12:42:56.942364 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:42:56.942375 | orchestrator | 2025-04-05 12:42:56.942385 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-04-05 12:42:56.942454 | orchestrator | Saturday 05 April 2025 12:38:45 +0000 (0:00:01.946) 0:00:09.055 ******** 2025-04-05 12:42:56.942469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:42:56.942480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942617 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.942633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942718 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.942853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:42:56.942912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942981 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.942998 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.943009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.943021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.943032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.943056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.943086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.943098 | orchestrator | 2025-04-05 12:42:56.943109 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-04-05 12:42:56.943120 | orchestrator | Saturday 05 April 2025 12:38:51 +0000 (0:00:05.683) 0:00:14.739 ******** 2025-04-05 12:42:56.943185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.943201 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943213 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.943247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943359 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.943371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943481 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.943492 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.943557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.943624 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.943646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943730 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.943746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943797 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.943813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.943867 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.943878 | orchestrator | 2025-04-05 12:42:56.943888 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-04-05 12:42:56.943899 | orchestrator | Saturday 05 April 2025 12:38:53 +0000 (0:00:01.630) 0:00:16.369 ******** 2025-04-05 12:42:56.943963 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.943979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.943990 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944002 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.944034 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944045 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.944056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944246 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.944256 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.944334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944410 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.944421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.944442 | orchestrator | skipping: [testbed-node-2] 
2025-04-05 12:42:56.944513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944557 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.944567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-05 12:42:56.944590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.944611 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.944621 | orchestrator | 2025-04-05 12:42:56.944632 | orchestrator | 
TASK [prometheus : Copying over config.json files] ***************************** 2025-04-05 12:42:56.944642 | orchestrator | Saturday 05 April 2025 12:38:55 +0000 (0:00:02.206) 0:00:18.576 ******** 2025-04-05 12:42:56.944662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944818 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.944841 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:42:56.944852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.944920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.944954 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.944965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.944988 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.944999 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.945133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.945165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.945264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945290 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.945413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.945474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 
12:42:56.945573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.945592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945646 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:42:56.945656 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945665 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.945739 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.945848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.945857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.945896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945905 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945933 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.945944 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.945962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.945981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.945990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.946005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.946058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.946070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.946080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.946089 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.946107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.946122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.946131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.946140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.946170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.946181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 
'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.946191 | orchestrator | 2025-04-05 12:42:56.946201 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-04-05 12:42:56.946211 | orchestrator | Saturday 05 April 2025 12:39:01 +0000 (0:00:06.215) 0:00:24.791 ******** 2025-04-05 12:42:56.946221 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:42:56.946230 | orchestrator | 2025-04-05 12:42:56.946240 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-04-05 12:42:56.946249 | orchestrator | Saturday 05 April 2025 12:39:02 +0000 (0:00:00.819) 0:00:25.611 ******** 2025-04-05 12:42:56.946259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946283 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946294 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946304 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946334 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946355 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946365 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946388 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330661, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.946399 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946412 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946441 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946452 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946463 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946473 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946500 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946511 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946521 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946549 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946560 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946569 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946583 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946609 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946647 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946657 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946666 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 
1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946687 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1330653, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.946696 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946705 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946714 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946743 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946753 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946803 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946812 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946821 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946830 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946859 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946869 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946886 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946900 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946909 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946918 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946927 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946936 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1330626, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.946972 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946988 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.946998 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947007 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947016 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947025 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947040 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947069 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947085 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947095 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 
'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947104 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947113 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947129 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947138 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947167 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947183 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947192 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1330628, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.947201 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947210 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947226 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947236 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947263 | orchestrator 
| skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947278 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947287 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947313 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947322 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947331 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947365 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947384 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947393 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947409 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 
'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947419 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1330647, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.947461 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947471 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947481 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947499 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947509 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947518 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947527 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947571 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947580 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947597 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947629 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947657 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1330635, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.158514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.947685 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947694 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947703 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947712 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947729 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947739 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.947807 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947821 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947840 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947850 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947860 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947870 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947885 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947894 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.947923 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947934 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.947949 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947958 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.947967 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947976 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.947985 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-05 12:42:56.947993 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.948002 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1330645, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.160514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948016 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1330655, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1625142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948025 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1330660, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1645143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948051 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1330672, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948068 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1330656, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1635141, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948077 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330632, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.157514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948086 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1330643, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948095 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1330623, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1565142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948109 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1330650, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1615143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948117 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1330671, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1685143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948151 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1330641, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1595142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948162 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1330662, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1655142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-05 12:42:56.948171 | orchestrator | 2025-04-05 12:42:56.948180 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-04-05 12:42:56.948189 | orchestrator | Saturday 05 April 2025 12:40:02 +0000 (0:01:00.384) 0:01:25.996 ******** 2025-04-05 12:42:56.948198 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:42:56.948207 | orchestrator | 2025-04-05 12:42:56.948215 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-04-05 12:42:56.948227 | orchestrator | Saturday 05 April 2025 12:40:03 +0000 (0:00:00.400) 0:01:26.397 ******** 2025-04-05 12:42:56.948236 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948254 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948271 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948280 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:42:56.948294 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948303 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948311 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948320 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948328 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948337 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:42:56.948346 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948355 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948364 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948381 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948390 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948407 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948424 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948433 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948441 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948450 | orchestrator | node-3/prometheus.yml.d' path 
due to this access issue: 2025-04-05 12:42:56.948459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948467 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948476 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948485 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948494 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948502 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948511 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948520 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.948528 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948537 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-04-05 12:42:56.948546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-05 12:42:56.948554 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-04-05 12:42:56.948563 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-05 12:42:56.948572 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-05 12:42:56.948580 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:42:56.948589 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-05 12:42:56.948598 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-05 12:42:56.948606 | orchestrator | 2025-04-05 12:42:56.948618 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-04-05 12:42:56.948627 | orchestrator | Saturday 05 April 2025 12:40:04 +0000 (0:00:01.336) 0:01:27.733 ******** 2025-04-05 12:42:56.948636 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948645 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.948654 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948662 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.948671 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948680 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.948695 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948704 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.948712 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948721 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.948729 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-05 12:42:56.948738 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.948747 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-04-05 12:42:56.948755 | orchestrator | 2025-04-05 12:42:56.948775 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-04-05 12:42:56.948784 | orchestrator | Saturday 05 April 2025 12:40:17 +0000 (0:00:12.940) 0:01:40.674 ******** 
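Note on the preceding prometheus tasks: the role copies the alert rule files from /operations/prometheus/ into the prometheus-server Kolla config directory and looks for optional per-host overrides under /opt/configuration/environments/kolla/files/overlays/prometheus/<hostname>/prometheus.yml.d; the [WARNING] lines above only mean that no such override directories exist in this testbed configuration, not that anything failed. As an illustration only, and not taken from this job output, a minimal alert rules file of the kind being copied (for example a hypothetical node.rules) could look like the YAML sketch below; the group name, expression, and threshold are placeholder assumptions, not the contents of the real testbed rules.

    groups:
      - name: node                  # arbitrary group name for these example rules
        rules:
          # Fire when a node-exporter scrape target has been unreachable for 5 minutes.
          - alert: NodeDown
            expr: up{job="node"} == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Node exporter target {{ $labels.instance }} is down"

Files found in a prometheus.yml.d override directory are picked up by the "Find prometheus ... config overrides" tasks and merged into the generated prometheus.yml for that host, so the same overlay mechanism can extend the scrape configuration without editing the role templates.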
2025-04-05 12:42:56.948792 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948800 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.948808 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948816 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.948824 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948831 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.948840 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948847 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948855 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.948863 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.948871 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-05 12:42:56.948879 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.948887 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-04-05 12:42:56.948895 | orchestrator | 2025-04-05 12:42:56.948903 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-04-05 12:42:56.948911 | orchestrator | Saturday 05 April 2025 12:40:21 +0000 (0:00:04.349) 0:01:45.023 ******** 2025-04-05 12:42:56.948919 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.948927 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.948935 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.948943 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.948951 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.948959 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.948967 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.948975 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.948983 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.948991 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.948999 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-05 12:42:56.949007 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949015 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-04-05 12:42:56.949027 | orchestrator | 2025-04-05 12:42:56.949035 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-04-05 12:42:56.949043 | orchestrator | 
Saturday 05 April 2025 12:40:25 +0000 (0:00:03.297) 0:01:48.321 ******** 2025-04-05 12:42:56.949051 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:42:56.949059 | orchestrator | 2025-04-05 12:42:56.949067 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-04-05 12:42:56.949075 | orchestrator | Saturday 05 April 2025 12:40:25 +0000 (0:00:00.337) 0:01:48.658 ******** 2025-04-05 12:42:56.949083 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949091 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949099 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949111 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949119 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949127 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949135 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949143 | orchestrator | 2025-04-05 12:42:56.949151 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-04-05 12:42:56.949159 | orchestrator | Saturday 05 April 2025 12:40:26 +0000 (0:00:00.719) 0:01:49.378 ******** 2025-04-05 12:42:56.949167 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949175 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949182 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949190 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949198 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.949206 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.949214 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.949227 | orchestrator | 2025-04-05 12:42:56.949236 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-04-05 12:42:56.949244 | orchestrator | Saturday 05 April 2025 12:40:29 +0000 (0:00:03.316) 0:01:52.694 ******** 2025-04-05 12:42:56.949252 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949260 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949269 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949277 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949285 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949293 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949301 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949309 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949317 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949325 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949333 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949341 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949349 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-05 12:42:56.949357 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949365 | orchestrator | 2025-04-05 12:42:56.949373 | orchestrator | TASK [prometheus : Copying config 
file for blackbox exporter] ****************** 2025-04-05 12:42:56.949381 | orchestrator | Saturday 05 April 2025 12:40:32 +0000 (0:00:02.534) 0:01:55.229 ******** 2025-04-05 12:42:56.949389 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949397 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949404 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949417 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949425 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949433 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949444 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949452 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949460 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949468 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949476 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-05 12:42:56.949484 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949491 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-04-05 12:42:56.949499 | orchestrator | 2025-04-05 12:42:56.949507 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-04-05 12:42:56.949518 | orchestrator | Saturday 05 April 2025 12:40:34 +0000 (0:00:02.925) 0:01:58.154 ******** 2025-04-05 12:42:56.949527 | orchestrator | [WARNING]: Skipped 2025-04-05 12:42:56.949534 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-04-05 12:42:56.949542 | orchestrator | due to this access issue: 2025-04-05 12:42:56.949550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-04-05 12:42:56.949558 | orchestrator | not a directory 2025-04-05 12:42:56.949566 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-05 12:42:56.949574 | orchestrator | 2025-04-05 12:42:56.949582 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-04-05 12:42:56.949590 | orchestrator | Saturday 05 April 2025 12:40:36 +0000 (0:00:01.797) 0:01:59.952 ******** 2025-04-05 12:42:56.949598 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949606 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949614 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949622 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949629 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949637 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949645 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949653 | orchestrator | 2025-04-05 12:42:56.949661 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-04-05 12:42:56.949669 | orchestrator | Saturday 05 April 2025 12:40:37 +0000 (0:00:00.945) 0:02:00.897 ******** 2025-04-05 
12:42:56.949677 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949685 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949695 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949704 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949712 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949720 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949727 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949735 | orchestrator | 2025-04-05 12:42:56.949743 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-04-05 12:42:56.949751 | orchestrator | Saturday 05 April 2025 12:40:38 +0000 (0:00:00.765) 0:02:01.662 ******** 2025-04-05 12:42:56.949759 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949788 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949796 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949804 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949813 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949825 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949833 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949841 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949849 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.949857 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949865 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949873 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-05 12:42:56.949881 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.949889 | orchestrator | 2025-04-05 12:42:56.949897 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-04-05 12:42:56.949905 | orchestrator | Saturday 05 April 2025 12:40:42 +0000 (0:00:03.754) 0:02:05.417 ******** 2025-04-05 12:42:56.949913 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.949921 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:56.949929 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.949937 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:56.949945 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.949953 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:56.949961 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.949969 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:56.949977 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.949985 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:56.949993 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.950001 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:56.950009 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-05 12:42:56.950040 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:42:56.950048 | orchestrator | 2025-04-05 12:42:56.950056 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-04-05 12:42:56.950065 | orchestrator | Saturday 05 April 2025 12:40:45 +0000 (0:00:03.088) 0:02:08.505 ******** 2025-04-05 12:42:56.950073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-05 12:42:56.950087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950195 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 
12:42:56.950243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-05 12:42:56.950260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950316 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-05 12:42:56.950326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-05 12:42:56.950413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950445 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950458 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.950469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-05 12:42:56.950802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-05 12:42:56.950811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-05 12:42:56.950823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.950852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.950899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-05 12:42:56.950916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-05 12:42:56.950941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-05 12:42:56.950950 | orchestrator | 2025-04-05 12:42:56.950958 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-04-05 12:42:56.950966 | orchestrator | Saturday 05 April 2025 12:40:50 +0000 (0:00:05.516) 0:02:14.021 ******** 2025-04-05 12:42:56.950974 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-05 12:42:56.950982 | orchestrator | 2025-04-05 12:42:56.950990 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.950998 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:02.292) 0:02:16.314 ******** 2025-04-05 12:42:56.951006 | orchestrator | 2025-04-05 12:42:56.951014 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951022 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.041) 0:02:16.355 ******** 2025-04-05 12:42:56.951030 | orchestrator | 2025-04-05 12:42:56.951038 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951046 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.038) 0:02:16.394 ******** 2025-04-05 12:42:56.951053 | orchestrator | 2025-04-05 12:42:56.951065 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951077 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.143) 0:02:16.538 ******** 2025-04-05 12:42:56.951086 | orchestrator | 2025-04-05 12:42:56.951093 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951101 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.039) 0:02:16.577 ******** 2025-04-05 12:42:56.951109 | orchestrator | 2025-04-05 12:42:56.951117 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951125 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.037) 0:02:16.614 ******** 2025-04-05 12:42:56.951132 | orchestrator | 2025-04-05 12:42:56.951140 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-05 12:42:56.951148 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.040) 0:02:16.655 ******** 2025-04-05 12:42:56.951156 | orchestrator | 2025-04-05 12:42:56.951164 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-04-05 12:42:56.951172 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:00.145) 0:02:16.800 ******** 
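For context, each loop item dumped by the 'Check prometheus containers' task above is one entry of a services mapping: a container name, the inventory group it belongs to, an enabled flag, the image, and its bind mounts. A node is reported as 'changed' only for entries whose group it is in and which are enabled; everything else shows up as 'skipping'. Below is a minimal Python sketch of that filtering, written against the item shape shown in the log; it is an illustration, not the kolla-ansible implementation, and the host_groups input is an assumption.

# Illustrative sketch: mirrors the shape of the loop items above,
# not the actual kolla-ansible task logic.
def containers_for_host(services, host_groups):
    """Yield (container_name, image) for services that are enabled and whose
    group matches one of the host's inventory groups (assumed input)."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("group") in host_groups:
            yield svc["container_name"], svc["image"]

# Two entries copied from the log output above:
services = {
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.1",
    },
    "prometheus-openstack-exporter": {
        "container_name": "prometheus_openstack_exporter",
        "group": "prometheus-openstack-exporter",
        "enabled": False,
        "image": "registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1",
    },
}
print(list(containers_for_host(services, {"prometheus-cadvisor"})))
# -> [('prometheus_cadvisor', 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1')]

The same enabled/group check explains the restart handlers that follow: only containers whose configuration actually changed on a node get notified and restarted there.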
2025-04-05 12:42:56.951179 | orchestrator | changed: [testbed-manager] 2025-04-05 12:42:56.951187 | orchestrator | 2025-04-05 12:42:56.951198 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-04-05 12:42:56.951206 | orchestrator | Saturday 05 April 2025 12:41:10 +0000 (0:00:16.506) 0:02:33.307 ******** 2025-04-05 12:42:56.951214 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:56.951222 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.951230 | orchestrator | changed: [testbed-manager] 2025-04-05 12:42:56.951238 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:56.951246 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.951253 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.951265 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:56.951273 | orchestrator | 2025-04-05 12:42:56.951281 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-04-05 12:42:56.951289 | orchestrator | Saturday 05 April 2025 12:41:29 +0000 (0:00:19.541) 0:02:52.849 ******** 2025-04-05 12:42:56.951297 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.951304 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.951312 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.951320 | orchestrator | 2025-04-05 12:42:56.951328 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-04-05 12:42:56.951336 | orchestrator | Saturday 05 April 2025 12:41:45 +0000 (0:00:16.239) 0:03:09.089 ******** 2025-04-05 12:42:56.951344 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.951352 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.951360 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.951368 | orchestrator | 2025-04-05 12:42:56.951376 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-04-05 12:42:56.951384 | orchestrator | Saturday 05 April 2025 12:41:58 +0000 (0:00:12.382) 0:03:21.471 ******** 2025-04-05 12:42:56.951392 | orchestrator | changed: [testbed-manager] 2025-04-05 12:42:56.951400 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:56.951408 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:56.951415 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.951423 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.951431 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.951439 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:56.951447 | orchestrator | 2025-04-05 12:42:56.951455 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-04-05 12:42:56.951463 | orchestrator | Saturday 05 April 2025 12:42:16 +0000 (0:00:18.270) 0:03:39.742 ******** 2025-04-05 12:42:56.951471 | orchestrator | changed: [testbed-manager] 2025-04-05 12:42:56.951479 | orchestrator | 2025-04-05 12:42:56.951487 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-04-05 12:42:56.951495 | orchestrator | Saturday 05 April 2025 12:42:25 +0000 (0:00:09.409) 0:03:49.152 ******** 2025-04-05 12:42:56.951507 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:56.951515 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:56.951523 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:56.951531 | orchestrator | 2025-04-05 12:42:56.951542 | 
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-04-05 12:42:59.972179 | orchestrator | Saturday 05 April 2025 12:42:38 +0000 (0:00:12.489) 0:04:01.641 ******** 2025-04-05 12:42:59.972326 | orchestrator | changed: [testbed-manager] 2025-04-05 12:42:59.972339 | orchestrator | 2025-04-05 12:42:59.972345 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-04-05 12:42:59.972350 | orchestrator | Saturday 05 April 2025 12:42:46 +0000 (0:00:07.720) 0:04:09.361 ******** 2025-04-05 12:42:59.972355 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:59.972361 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:59.972366 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:59.972371 | orchestrator | 2025-04-05 12:42:59.972377 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:42:59.972383 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-05 12:42:59.972390 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-05 12:42:59.972395 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-05 12:42:59.972400 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-05 12:42:59.972405 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-05 12:42:59.972410 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-05 12:42:59.972415 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-05 12:42:59.972420 | orchestrator | 2025-04-05 12:42:59.972424 | orchestrator | 2025-04-05 12:42:59.972429 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:42:59.972434 | orchestrator | Saturday 05 April 2025 12:42:56 +0000 (0:00:09.899) 0:04:19.260 ******** 2025-04-05 12:42:59.972439 | orchestrator | =============================================================================== 2025-04-05 12:42:59.972444 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 60.38s 2025-04-05 12:42:59.972449 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.54s 2025-04-05 12:42:59.972454 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.27s 2025-04-05 12:42:59.972459 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.51s 2025-04-05 12:42:59.972463 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 16.24s 2025-04-05 12:42:59.972468 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.94s 2025-04-05 12:42:59.972473 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.49s 2025-04-05 12:42:59.972478 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.38s 2025-04-05 12:42:59.972483 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.90s 2025-04-05 12:42:59.972488 | orchestrator | prometheus : Restart prometheus-alertmanager 
container ------------------ 9.41s 2025-04-05 12:42:59.972492 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.72s 2025-04-05 12:42:59.972497 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.22s 2025-04-05 12:42:59.972530 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.68s 2025-04-05 12:42:59.972535 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.52s 2025-04-05 12:42:59.972540 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.35s 2025-04-05 12:42:59.972545 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.75s 2025-04-05 12:42:59.972550 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.72s 2025-04-05 12:42:59.972554 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.32s 2025-04-05 12:42:59.972559 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.30s 2025-04-05 12:42:59.972564 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.09s 2025-04-05 12:42:59.972569 | orchestrator | 2025-04-05 12:42:56 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:59.972575 | orchestrator | 2025-04-05 12:42:56 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state STARTED 2025-04-05 12:42:59.972580 | orchestrator | 2025-04-05 12:42:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:42:59.972597 | orchestrator | 2025-04-05 12:42:59 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:42:59.973156 | orchestrator | 2025-04-05 12:42:59 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:42:59.973172 | orchestrator | 2025-04-05 12:42:59 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:42:59.975817 | orchestrator | 2025-04-05 12:42:59 | INFO  | Task 4d4f2fdc-176f-49df-a5f3-d03d20895ab2 is in state SUCCESS 2025-04-05 12:42:59.977054 | orchestrator | 2025-04-05 12:42:59.977070 | orchestrator | 2025-04-05 12:42:59.977076 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:42:59.977082 | orchestrator | 2025-04-05 12:42:59.977087 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:42:59.977093 | orchestrator | Saturday 05 April 2025 12:40:25 +0000 (0:00:00.198) 0:00:00.198 ******** 2025-04-05 12:42:59.977099 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:42:59.977105 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:42:59.977111 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:42:59.977116 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:42:59.977122 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:42:59.977128 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:42:59.977134 | orchestrator | 2025-04-05 12:42:59.977139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:42:59.977145 | orchestrator | Saturday 05 April 2025 12:40:26 +0000 (0:00:00.576) 0:00:00.774 ******** 2025-04-05 12:42:59.977150 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-04-05 12:42:59.977156 | orchestrator | ok: [testbed-node-1] => 
(item=enable_cinder_True) 2025-04-05 12:42:59.977161 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-04-05 12:42:59.977166 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-04-05 12:42:59.977171 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-04-05 12:42:59.977176 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-04-05 12:42:59.977181 | orchestrator | 2025-04-05 12:42:59.977186 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-04-05 12:42:59.977190 | orchestrator | 2025-04-05 12:42:59.977195 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-05 12:42:59.977200 | orchestrator | Saturday 05 April 2025 12:40:27 +0000 (0:00:01.016) 0:00:01.790 ******** 2025-04-05 12:42:59.977205 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:42:59.977234 | orchestrator | 2025-04-05 12:42:59.977241 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-04-05 12:42:59.977246 | orchestrator | Saturday 05 April 2025 12:40:28 +0000 (0:00:01.377) 0:00:03.168 ******** 2025-04-05 12:42:59.977251 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-04-05 12:42:59.977256 | orchestrator | 2025-04-05 12:42:59.977261 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-04-05 12:42:59.977266 | orchestrator | Saturday 05 April 2025 12:40:31 +0000 (0:00:02.913) 0:00:06.081 ******** 2025-04-05 12:42:59.977271 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-04-05 12:42:59.977276 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-04-05 12:42:59.977281 | orchestrator | 2025-04-05 12:42:59.977286 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-04-05 12:42:59.977291 | orchestrator | Saturday 05 April 2025 12:40:37 +0000 (0:00:05.778) 0:00:11.860 ******** 2025-04-05 12:42:59.977296 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:42:59.977301 | orchestrator | 2025-04-05 12:42:59.977306 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-04-05 12:42:59.977311 | orchestrator | Saturday 05 April 2025 12:40:40 +0000 (0:00:03.198) 0:00:15.059 ******** 2025-04-05 12:42:59.977316 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:42:59.977321 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-04-05 12:42:59.977326 | orchestrator | 2025-04-05 12:42:59.977331 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-04-05 12:42:59.977336 | orchestrator | Saturday 05 April 2025 12:40:44 +0000 (0:00:03.491) 0:00:18.550 ******** 2025-04-05 12:42:59.977341 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:42:59.977346 | orchestrator | 2025-04-05 12:42:59.977351 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-04-05 12:42:59.977356 | orchestrator | Saturday 05 April 2025 12:40:47 +0000 (0:00:02.944) 0:00:21.494 ******** 2025-04-05 
12:42:59.977361 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-04-05 12:42:59.977366 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-04-05 12:42:59.977370 | orchestrator | 2025-04-05 12:42:59.977375 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-04-05 12:42:59.977380 | orchestrator | Saturday 05 April 2025 12:40:54 +0000 (0:00:06.950) 0:00:28.444 ******** 2025-04-05 12:42:59.977387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.977401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.977419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.977424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.977462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.977473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.977478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.977739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.977798 | orchestrator | 2025-04-05 12:42:59.977803 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-05 12:42:59.977808 | orchestrator | Saturday 05 April 2025 12:40:56 +0000 (0:00:01.958) 0:00:30.403 ******** 2025-04-05 12:42:59.977813 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.977819 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.977824 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.977829 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:42:59.977834 | orchestrator | 2025-04-05 12:42:59.977842 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-04-05 12:42:59.977848 | orchestrator | Saturday 05 April 2025 12:40:57 +0000 (0:00:01.569) 0:00:31.972 ******** 2025-04-05 12:42:59.977853 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-04-05 12:42:59.977858 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-04-05 12:42:59.977865 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-04-05 12:42:59.977870 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-04-05 12:42:59.977875 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-04-05 12:42:59.977880 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-04-05 12:42:59.977885 | orchestrator | 2025-04-05 12:42:59.977890 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-04-05 12:42:59.977895 | orchestrator | Saturday 05 April 2025 12:41:00 +0000 (0:00:02.394) 0:00:34.367 ******** 2025-04-05 12:42:59.977901 | orchestrator | skipping: [testbed-node-4] => 
(item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977908 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977931 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977937 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977942 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977947 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-05 12:42:59.977953 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.977966 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.977977 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.977983 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.977989 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.977994 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-05 12:42:59.978002 | orchestrator | 2025-04-05 12:42:59.978008 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-04-05 12:42:59.978050 | orchestrator | Saturday 05 April 2025 12:41:03 +0000 (0:00:03.700) 0:00:38.068 ******** 2025-04-05 12:42:59.978056 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:42:59.978061 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
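The external Ceph tasks around this point copy a per-backend ceph.conf and the Ceph client keyrings into each cinder service's config directory on the volume/backup nodes. A rough Python sketch of how those copy operations can be enumerated follows; the /etc/kolla/<service>/ceph destination layout and the keyring mapping for cinder-volume are assumptions for illustration (only the cinder-backup keyring pair is spelled out in the log), not something the log confirms.

# Illustrative sketch only; destination paths and the cinder-volume keyring
# mapping are assumptions, not values taken from the log above.
ceph_backends = [{"name": "rbd-1", "cluster": "ceph", "enabled": True}]
keyrings = {
    "cinder-volume": ["ceph.client.cinder.keyring"],          # assumed
    "cinder-backup": ["ceph.client.cinder.keyring",           # listed in the log
                      "ceph.client.cinder-backup.keyring"],
}

def planned_copies(backends, keyrings):
    """Yield (service, cluster, keyring) for every enabled Ceph backend."""
    for backend in backends:
        if not backend.get("enabled"):
            continue
        for service, files in keyrings.items():
            for keyring in files:
                yield service, backend["cluster"], keyring

for service, cluster, keyring in planned_copies(ceph_backends, keyrings):
    print(f"/etc/kolla/{service}/ceph/{keyring}  (cluster={cluster})")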
2025-04-05 12:42:59.978066 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:42:59.978071 | orchestrator | 2025-04-05 12:42:59.978076 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-04-05 12:42:59.978084 | orchestrator | Saturday 05 April 2025 12:41:05 +0000 (0:00:01.711) 0:00:39.779 ******** 2025-04-05 12:42:59.978089 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-04-05 12:42:59.978094 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-04-05 12:42:59.978099 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-04-05 12:42:59.978104 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:42:59.978109 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:42:59.978114 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-04-05 12:42:59.978118 | orchestrator | 2025-04-05 12:42:59.978123 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-04-05 12:42:59.978128 | orchestrator | Saturday 05 April 2025 12:41:08 +0000 (0:00:02.945) 0:00:42.724 ******** 2025-04-05 12:42:59.978133 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-04-05 12:42:59.978138 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-04-05 12:42:59.978143 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-04-05 12:42:59.978148 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-04-05 12:42:59.978152 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-04-05 12:42:59.978157 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-04-05 12:42:59.978162 | orchestrator | 2025-04-05 12:42:59.978167 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-04-05 12:42:59.978172 | orchestrator | Saturday 05 April 2025 12:41:09 +0000 (0:00:00.987) 0:00:43.712 ******** 2025-04-05 12:42:59.978177 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.978182 | orchestrator | 2025-04-05 12:42:59.978187 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-04-05 12:42:59.978191 | orchestrator | Saturday 05 April 2025 12:41:09 +0000 (0:00:00.173) 0:00:43.886 ******** 2025-04-05 12:42:59.978196 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.978201 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.978206 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.978211 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.978216 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.978220 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.978225 | orchestrator | 2025-04-05 12:42:59.978230 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-05 12:42:59.978235 | orchestrator | Saturday 05 April 2025 12:41:11 +0000 (0:00:01.690) 0:00:45.576 ******** 2025-04-05 12:42:59.978240 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:42:59.978249 | orchestrator | 2025-04-05 12:42:59.978254 | orchestrator | TASK 
[service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-04-05 12:42:59.978259 | orchestrator | Saturday 05 April 2025 12:41:14 +0000 (0:00:02.919) 0:00:48.495 ******** 2025-04-05 12:42:59.978264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.978269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.978284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978295 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.978308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.978354 | orchestrator | 2025-04-05 12:42:59.978359 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-04-05 12:42:59.978364 | orchestrator | Saturday 05 April 2025 12:41:18 +0000 (0:00:04.508) 0:00:53.004 ******** 2025-04-05 12:42:59.978369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978383 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.978388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978401 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.978406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978420 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.978429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978443 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.978448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978462 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.978467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978480 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.978485 | orchestrator | 2025-04-05 12:42:59.978490 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-04-05 12:42:59.978495 | orchestrator | Saturday 05 April 2025 12:41:21 +0000 (0:00:02.929) 0:00:55.934 ******** 2025-04-05 12:42:59.978499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978528 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.978533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.978541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978551 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:42:59.978556 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.978561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978576 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.978581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.978591 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.978599 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979257 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.979263 | orchestrator | 2025-04-05 12:42:59.979268 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-04-05 12:42:59.979273 | orchestrator | Saturday 05 April 2025 12:41:23 +0000 (0:00:01.992) 0:00:57.927 ******** 2025-04-05 12:42:59.979278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979395 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979477 | orchestrator | 2025-04-05 12:42:59.979483 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-04-05 12:42:59.979488 | orchestrator | Saturday 05 April 2025 12:41:26 +0000 (0:00:03.053) 0:01:00.980 ******** 2025-04-05 12:42:59.979497 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-05 12:42:59.979502 | orchestrator | skipping: [testbed-node-3] 
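The cinder-api health checks above take the form healthcheck_curl http://<node-ip>:8776, while the haproxy entries publish the same port 8776 both internally and externally (api.testbed.osism.xyz). The actual healthcheck_curl helper used inside the containers is not shown in this log; purely as a hypothetical illustration of that kind of liveness probe (HTTP GET, any answer counts as up, no answer counts as down), a standalone check could look like this:

import sys
import urllib.error
import urllib.request


def http_probe(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers at all, regardless of status code."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, just not with 2xx; for a liveness probe
        # that still counts as reachable.
        return True
    except OSError:
        # Covers urllib.error.URLError, connection refused, timeouts, DNS errors.
        return False


if __name__ == "__main__":
    # Example endpoint taken from the log output above (cinder-api on 192.168.16.10).
    url = sys.argv[1] if len(sys.argv) > 1 else "http://192.168.16.10:8776"
    sys.exit(0 if http_probe(url) else 1)

The exit-code convention (0 = healthy, non-zero = unhealthy) is what a Docker-style CMD-SHELL health check expects, which fits the interval/retries/start_period/timeout fields carried next to each test in these definitions.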
2025-04-05 12:42:59.979513 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-05 12:42:59.979519 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.979526 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-05 12:42:59.979532 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.979537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-05 12:42:59.979542 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-05 12:42:59.979550 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-05 12:42:59.979624 | orchestrator | 2025-04-05 12:42:59.979630 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-04-05 12:42:59.979635 | orchestrator | Saturday 05 April 2025 12:41:29 +0000 (0:00:02.572) 0:01:03.553 ******** 2025-04-05 12:42:59.979640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 
12:42:59.979664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.979729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})  2025-04-05 12:42:59.979832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.979850 | orchestrator | 2025-04-05 12:42:59.979855 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-04-05 12:42:59.979864 | orchestrator | Saturday 05 April 2025 12:41:41 +0000 (0:00:12.217) 0:01:15.770 ******** 2025-04-05 12:42:59.979869 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.979875 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.979880 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.979889 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:59.979894 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:59.979899 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:59.979904 | orchestrator | 2025-04-05 12:42:59.979910 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-04-05 12:42:59.979915 | orchestrator | Saturday 05 April 2025 12:41:44 +0000 (0:00:02.682) 0:01:18.452 ******** 2025-04-05 12:42:59.979921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979951 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.979956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.979993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.979998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980022 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.980027 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.980036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980086 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.980091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980099 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.980105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980135 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.980140 | orchestrator | 2025-04-05 12:42:59.980145 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-04-05 12:42:59.980151 | orchestrator | Saturday 05 April 2025 12:41:45 +0000 (0:00:01.879) 0:01:20.332 ******** 2025-04-05 12:42:59.980156 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.980161 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.980166 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.980171 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.980177 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.980182 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.980187 | orchestrator | 2025-04-05 12:42:59.980192 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-04-05 12:42:59.980201 | orchestrator | Saturday 05 April 2025 12:41:47 +0000 (0:00:01.319) 0:01:21.652 ******** 2025-04-05 12:42:59.980206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.980229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.980235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-05 12:42:59.980255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-05 12:42:59.980274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980283 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-05 12:42:59.980374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-05 12:42:59.980397 | 
orchestrator | 2025-04-05 12:42:59.980402 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-05 12:42:59.980408 | orchestrator | Saturday 05 April 2025 12:41:50 +0000 (0:00:03.362) 0:01:25.014 ******** 2025-04-05 12:42:59.980414 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.980420 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:42:59.980426 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:42:59.980432 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:42:59.980438 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:42:59.980443 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:42:59.980449 | orchestrator | 2025-04-05 12:42:59.980455 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-04-05 12:42:59.980461 | orchestrator | Saturday 05 April 2025 12:41:51 +0000 (0:00:00.704) 0:01:25.718 ******** 2025-04-05 12:42:59.980466 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:59.980472 | orchestrator | 2025-04-05 12:42:59.980478 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-04-05 12:42:59.980484 | orchestrator | Saturday 05 April 2025 12:41:53 +0000 (0:00:01.840) 0:01:27.559 ******** 2025-04-05 12:42:59.980489 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:59.980495 | orchestrator | 2025-04-05 12:42:59.980501 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-04-05 12:42:59.980507 | orchestrator | Saturday 05 April 2025 12:41:55 +0000 (0:00:02.178) 0:01:29.737 ******** 2025-04-05 12:42:59.980516 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:59.980521 | orchestrator | 2025-04-05 12:42:59.980527 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980533 | orchestrator | Saturday 05 April 2025 12:42:07 +0000 (0:00:12.425) 0:01:42.162 ******** 2025-04-05 12:42:59.980539 | orchestrator | 2025-04-05 12:42:59.980545 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980551 | orchestrator | Saturday 05 April 2025 12:42:07 +0000 (0:00:00.163) 0:01:42.326 ******** 2025-04-05 12:42:59.980557 | orchestrator | 2025-04-05 12:42:59.980562 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980568 | orchestrator | Saturday 05 April 2025 12:42:08 +0000 (0:00:00.174) 0:01:42.500 ******** 2025-04-05 12:42:59.980574 | orchestrator | 2025-04-05 12:42:59.980582 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980588 | orchestrator | Saturday 05 April 2025 12:42:08 +0000 (0:00:00.258) 0:01:42.759 ******** 2025-04-05 12:42:59.980594 | orchestrator | 2025-04-05 12:42:59.980600 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980606 | orchestrator | Saturday 05 April 2025 12:42:08 +0000 (0:00:00.066) 0:01:42.826 ******** 2025-04-05 12:42:59.980612 | orchestrator | 2025-04-05 12:42:59.980618 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-05 12:42:59.980623 | orchestrator | Saturday 05 April 2025 12:42:08 +0000 (0:00:00.071) 0:01:42.897 ******** 2025-04-05 12:42:59.980629 | orchestrator | 2025-04-05 12:42:59.980634 | orchestrator | 
RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-04-05 12:42:59.980639 | orchestrator | Saturday 05 April 2025 12:42:08 +0000 (0:00:00.066) 0:01:42.964 ******** 2025-04-05 12:42:59.980644 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:59.980650 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:59.980655 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:59.980660 | orchestrator | 2025-04-05 12:42:59.980665 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-04-05 12:42:59.980671 | orchestrator | Saturday 05 April 2025 12:42:27 +0000 (0:00:19.000) 0:02:01.964 ******** 2025-04-05 12:42:59.980676 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:42:59.980681 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:42:59.980687 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:42:59.980692 | orchestrator | 2025-04-05 12:42:59.980700 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-04-05 12:42:59.980705 | orchestrator | Saturday 05 April 2025 12:42:33 +0000 (0:00:05.832) 0:02:07.796 ******** 2025-04-05 12:42:59.980711 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:59.980716 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:59.980721 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:59.980726 | orchestrator | 2025-04-05 12:42:59.980732 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-04-05 12:42:59.980737 | orchestrator | Saturday 05 April 2025 12:42:50 +0000 (0:00:17.171) 0:02:24.968 ******** 2025-04-05 12:42:59.980742 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:42:59.980747 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:42:59.980752 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:42:59.980758 | orchestrator | 2025-04-05 12:42:59.980777 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-04-05 12:42:59.980784 | orchestrator | Saturday 05 April 2025 12:42:57 +0000 (0:00:06.660) 0:02:31.629 ******** 2025-04-05 12:42:59.980789 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:42:59.980794 | orchestrator | 2025-04-05 12:42:59.980799 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:42:59.980805 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-05 12:42:59.980810 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-05 12:42:59.980820 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-05 12:42:59.980825 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:42:59.980831 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:42:59.980836 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:42:59.980841 | orchestrator | 2025-04-05 12:42:59.980847 | orchestrator | 2025-04-05 12:42:59.980852 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:42:59.980857 | orchestrator | Saturday 05 April 2025 12:42:58 +0000 (0:00:00.885) 0:02:32.515 
******** 2025-04-05 12:42:59.980862 | orchestrator | =============================================================================== 2025-04-05 12:42:59.980867 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.00s 2025-04-05 12:42:59.980873 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 17.17s 2025-04-05 12:42:59.980878 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 12.43s 2025-04-05 12:42:59.980883 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.22s 2025-04-05 12:42:59.980888 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.95s 2025-04-05 12:42:59.980894 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.66s 2025-04-05 12:42:59.980899 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.83s 2025-04-05 12:42:59.980904 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.78s 2025-04-05 12:42:59.980909 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.51s 2025-04-05 12:42:59.980914 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.70s 2025-04-05 12:42:59.980919 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.49s 2025-04-05 12:42:59.980925 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.36s 2025-04-05 12:42:59.980932 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.20s 2025-04-05 12:43:02.998687 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.05s 2025-04-05 12:43:02.998931 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.95s 2025-04-05 12:43:02.998957 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.94s 2025-04-05 12:43:02.998972 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 2.93s 2025-04-05 12:43:02.998987 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.92s 2025-04-05 12:43:02.999001 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.91s 2025-04-05 12:43:02.999016 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.68s 2025-04-05 12:43:02.999030 | orchestrator | 2025-04-05 12:42:59 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:02.999044 | orchestrator | 2025-04-05 12:42:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:02.999075 | orchestrator | 2025-04-05 12:43:02 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:02.999437 | orchestrator | 2025-04-05 12:43:02 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:43:02.999471 | orchestrator | 2025-04-05 12:43:02 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:03.003724 | orchestrator | 2025-04-05 12:43:03 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:06.057359 | orchestrator | 2025-04-05 12:43:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:06.057474 | 
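
The "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines interleaved here come from the OSISM manager polling its deployment tasks once per second until they finish. The equivalent wait loop can be sketched in Ansible with until/retries/delay; the URL and JSON shape below are purely hypothetical and only illustrate the polling pattern, not an actual OSISM API:

# Illustrative only: poll a task-status endpoint once per second until it
# reports SUCCESS. The endpoint and response format are hypothetical.
- name: Wait for deployment task to finish
  ansible.builtin.uri:
    url: "https://manager.example.test/api/tasks/4518e32b-fbbb-4df2-a712-553ff00ec5f9"
    return_content: true
  register: task_status
  until: task_status.json.state == "SUCCESS"
  retries: 600
  delay: 1   # matches the "Wait 1 second(s) until the next check" interval
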
orchestrator | 2025-04-05 12:43:06 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:06.057940 | orchestrator | 2025-04-05 12:43:06 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:43:06.057961 | orchestrator | 2025-04-05 12:43:06 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:06.058637 | orchestrator | 2025-04-05 12:43:06 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:09.094100 | orchestrator | 2025-04-05 12:43:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:09.094219 | orchestrator | 2025-04-05 12:43:09 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:09.095365 | orchestrator | 2025-04-05 12:43:09 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:43:09.096417 | orchestrator | 2025-04-05 12:43:09 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:09.097726 | orchestrator | 2025-04-05 12:43:09 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:12.143715 | orchestrator | 2025-04-05 12:43:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:12.143898 | orchestrator | 2025-04-05 12:43:12 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:12.145233 | orchestrator | 2025-04-05 12:43:12 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:43:12.146933 | orchestrator | 2025-04-05 12:43:12 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:12.148844 | orchestrator | 2025-04-05 12:43:12 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:15.193040 | orchestrator | 2025-04-05 12:43:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:15.193182 | orchestrator | 2025-04-05 12:43:15 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:15.195880 | orchestrator | 2025-04-05 12:43:15 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state STARTED 2025-04-05 12:43:15.196888 | orchestrator | 2025-04-05 12:43:15 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:15.200878 | orchestrator | 2025-04-05 12:43:15 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:18.256090 | orchestrator | 2025-04-05 12:43:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:18.256219 | orchestrator | 2025-04-05 12:43:18 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:18.257598 | orchestrator | 2025-04-05 12:43:18 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:18.257635 | orchestrator | 2025-04-05 12:43:18 | INFO  | Task a564ad1b-cf82-41bb-b046-2327c1911202 is in state SUCCESS 2025-04-05 12:43:18.259037 | orchestrator | 2025-04-05 12:43:18.259074 | orchestrator | 2025-04-05 12:43:18.259089 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:43:18.259105 | orchestrator | 2025-04-05 12:43:18.259120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:43:18.259151 | orchestrator | Saturday 05 April 2025 12:40:17 +0000 (0:00:00.192) 0:00:00.192 ******** 2025-04-05 12:43:18.259610 | orchestrator | ok: 

[testbed-node-0] 2025-04-05 12:43:18.259630 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:43:18.259644 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:43:18.259658 | orchestrator | 2025-04-05 12:43:18.259672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:43:18.259686 | orchestrator | Saturday 05 April 2025 12:40:17 +0000 (0:00:00.282) 0:00:00.474 ******** 2025-04-05 12:43:18.259700 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-04-05 12:43:18.259714 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-04-05 12:43:18.259728 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-04-05 12:43:18.259742 | orchestrator | 2025-04-05 12:43:18.259756 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-04-05 12:43:18.259770 | orchestrator | 2025-04-05 12:43:18.259810 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-05 12:43:18.259825 | orchestrator | Saturday 05 April 2025 12:40:17 +0000 (0:00:00.321) 0:00:00.796 ******** 2025-04-05 12:43:18.259840 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:43:18.259855 | orchestrator | 2025-04-05 12:43:18.259869 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-04-05 12:43:18.259883 | orchestrator | Saturday 05 April 2025 12:40:18 +0000 (0:00:01.100) 0:00:01.896 ******** 2025-04-05 12:43:18.259898 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-04-05 12:43:18.260065 | orchestrator | 2025-04-05 12:43:18.260081 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-04-05 12:43:18.260098 | orchestrator | Saturday 05 April 2025 12:40:21 +0000 (0:00:02.831) 0:00:04.727 ******** 2025-04-05 12:43:18.260119 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-04-05 12:43:18.260134 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-04-05 12:43:18.260148 | orchestrator | 2025-04-05 12:43:18.260162 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-04-05 12:43:18.260176 | orchestrator | Saturday 05 April 2025 12:40:26 +0000 (0:00:05.289) 0:00:10.016 ******** 2025-04-05 12:43:18.260190 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:43:18.260204 | orchestrator | 2025-04-05 12:43:18.260218 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-04-05 12:43:18.260231 | orchestrator | Saturday 05 April 2025 12:40:29 +0000 (0:00:02.864) 0:00:12.881 ******** 2025-04-05 12:43:18.260245 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:43:18.260259 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-04-05 12:43:18.260272 | orchestrator | 2025-04-05 12:43:18.260286 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-04-05 12:43:18.260300 | orchestrator | Saturday 05 April 2025 12:40:33 +0000 (0:00:03.332) 0:00:16.213 ******** 2025-04-05 12:43:18.260313 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:43:18.260327 | orchestrator | 2025-04-05 12:43:18.260341 | 
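
The service-ks-register tasks above register glance in Keystone: the service catalog entry, the internal and public endpoints, the service project and user, and the admin role grant. A rough stand-alone equivalent using the openstack.cloud collection is sketched below; module and parameter names are recalled from that collection and may differ slightly between versions, and authentication (clouds.yaml or environment variables) is omitted:

# Approximate equivalent of the service-ks-register steps above.
# Module/parameter names are an approximation; auth setup is omitted.
- name: glance | Creating services
  openstack.cloud.catalog_service:
    name: glance
    service_type: image
    description: OpenStack Image

- name: glance | Creating endpoints
  openstack.cloud.endpoint:
    service: glance
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    region: RegionOne
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:9292" }
    - { interface: public, url: "https://api.testbed.osism.xyz:9292" }

- name: glance | Creating users
  openstack.cloud.identity_user:
    name: glance
    password: "{{ glance_keystone_password }}"   # assumed variable name
    default_project: service

- name: glance | Granting user roles
  openstack.cloud.role_assignment:
    user: glance
    role: admin
    project: service
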
orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-04-05 12:43:18.260355 | orchestrator | Saturday 05 April 2025 12:40:35 +0000 (0:00:02.773) 0:00:18.987 ******** 2025-04-05 12:43:18.260369 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-04-05 12:43:18.260382 | orchestrator | 2025-04-05 12:43:18.260396 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-04-05 12:43:18.260410 | orchestrator | Saturday 05 April 2025 12:40:39 +0000 (0:00:04.062) 0:00:23.049 ******** 2025-04-05 12:43:18.260477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.260515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.260544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.260579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.260607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.260639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.260667 | orchestrator | 2025-04-05 12:43:18.260682 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-05 12:43:18.260699 | orchestrator | Saturday 05 April 2025 12:40:45 +0000 (0:00:05.229) 0:00:28.279 ******** 2025-04-05 12:43:18.260715 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:43:18.260730 | orchestrator | 2025-04-05 12:43:18.260746 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-04-05 12:43:18.260761 | orchestrator | Saturday 05 April 2025 12:40:45 +0000 (0:00:00.559) 0:00:28.838 ******** 2025-04-05 12:43:18.260808 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:18.260824 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:43:18.260841 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:43:18.260857 | orchestrator | 2025-04-05 12:43:18.260872 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-04-05 12:43:18.260888 | orchestrator | Saturday 05 April 2025 12:40:51 +0000 (0:00:06.060) 0:00:34.899 ******** 2025-04-05 12:43:18.260903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.260917 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.260932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.260946 | orchestrator | 2025-04-05 12:43:18.260960 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-04-05 12:43:18.260982 | orchestrator | Saturday 05 April 2025 12:40:53 +0000 (0:00:01.375) 0:00:36.275 ******** 2025-04-05 12:43:18.260996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.261010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.261024 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-05 12:43:18.261038 | orchestrator | 2025-04-05 12:43:18.261052 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-04-05 12:43:18.261066 | orchestrator | Saturday 05 April 2025 12:40:54 
+0000 (0:00:00.962) 0:00:37.237 ******** 2025-04-05 12:43:18.261080 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:43:18.261095 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:43:18.261109 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:43:18.261129 | orchestrator | 2025-04-05 12:43:18.261143 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-04-05 12:43:18.261157 | orchestrator | Saturday 05 April 2025 12:40:54 +0000 (0:00:00.596) 0:00:37.833 ******** 2025-04-05 12:43:18.261171 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.261185 | orchestrator | 2025-04-05 12:43:18.261203 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-04-05 12:43:18.261218 | orchestrator | Saturday 05 April 2025 12:40:54 +0000 (0:00:00.112) 0:00:37.946 ******** 2025-04-05 12:43:18.261232 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.261246 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.261260 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.261274 | orchestrator | 2025-04-05 12:43:18.261287 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-05 12:43:18.261301 | orchestrator | Saturday 05 April 2025 12:40:55 +0000 (0:00:00.332) 0:00:38.278 ******** 2025-04-05 12:43:18.261315 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:43:18.261329 | orchestrator | 2025-04-05 12:43:18.261343 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-04-05 12:43:18.261357 | orchestrator | Saturday 05 April 2025 12:40:55 +0000 (0:00:00.584) 0:00:38.863 ******** 2025-04-05 12:43:18.261380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
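
Each item={'key': 'glance-api', 'value': {...}} entry printed in these tasks comes from iterating over a service-definition dictionary: every container (image, volumes, healthcheck, haproxy members) is described once in a dict and the role loops over it with dict2items, skipping disabled entries. A minimal sketch with a heavily trimmed definition:

# Minimal sketch of looping over a service-definition dict, which is what
# produces the item={'key': ..., 'value': ...} entries in the output above.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  when: item.value.enabled | bool
  loop: "{{ glance_services | dict2items }}"
  vars:
    glance_services:            # trimmed-down example data
      glance-api:
        container_name: glance_api
        enabled: true
        image: registry.osism.tech/kolla/glance-api:2024.1
      glance-tls-proxy:
        container_name: glance_tls_proxy
        enabled: false
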
2025-04-05 12:43:18.261404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.261449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.261465 | orchestrator | 2025-04-05 12:43:18.261480 | 
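
The "Copying over backend internal TLS certificate/key" tasks that follow are skipped on every node because the glance TLS backend is disabled in this testbed; the underlying pattern is simply a file copy guarded by a when: condition and wired to the restart handler. A sketch with assumed variable names:

# Sketch of the guarded certificate copy; glance_enable_tls_backend and
# kolla_certificates_dir are assumed names standing in for the real flags.
- name: Copying over backend internal TLS certificate
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/glance-cert.pem"
    dest: /etc/kolla/glance-api/glance-cert.pem
    mode: "0600"
  when: glance_enable_tls_backend | bool
  notify: Restart glance-api container
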
orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-04-05 12:43:18.261493 | orchestrator | Saturday 05 April 2025 12:40:59 +0000 (0:00:04.179) 0:00:43.043 ******** 2025-04-05 12:43:18.261508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261540 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.261563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261579 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.261594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261626 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.261640 | orchestrator | 2025-04-05 12:43:18.261654 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-04-05 12:43:18.261668 | orchestrator | Saturday 05 April 2025 12:41:03 +0000 (0:00:03.966) 0:00:47.009 ******** 2025-04-05 12:43:18.261682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261703 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.261719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261740 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.261755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-05 12:43:18.261827 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.261843 | orchestrator | 2025-04-05 12:43:18.261857 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-04-05 12:43:18.261871 | orchestrator | Saturday 05 April 2025 12:41:07 +0000 (0:00:03.912) 0:00:50.922 ******** 2025-04-05 12:43:18.261885 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.261899 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.261913 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.262266 | orchestrator | 2025-04-05 12:43:18.262289 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-04-05 12:43:18.262303 | orchestrator | Saturday 05 April 2025 12:41:13 +0000 (0:00:05.527) 0:00:56.449 ******** 2025-04-05 12:43:18.262336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.262364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.262388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.262411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.262434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.262450 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.262472 | orchestrator | 2025-04-05 12:43:18.262487 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-04-05 12:43:18.262501 | orchestrator | Saturday 05 April 2025 12:41:22 +0000 (0:00:09.036) 0:01:05.485 ******** 2025-04-05 12:43:18.262515 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:43:18.262529 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:18.262543 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:43:18.262557 | orchestrator | 2025-04-05 12:43:18.262571 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-04-05 12:43:18.262585 | orchestrator | Saturday 05 April 2025 12:41:32 +0000 (0:00:10.396) 0:01:15.881 ******** 2025-04-05 12:43:18.262599 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.262613 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.262627 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.262640 | orchestrator | 2025-04-05 12:43:18.262654 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-04-05 12:43:18.262668 | orchestrator | Saturday 05 April 2025 12:41:43 +0000 (0:00:10.963) 0:01:26.845 ******** 2025-04-05 12:43:18.262682 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.262696 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.262710 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.262724 | orchestrator | 2025-04-05 12:43:18.262738 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-04-05 12:43:18.262752 | orchestrator | 
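
The "Copying over config.json files for services" and "Copying over glance-api.conf" tasks above render templates into /etc/kolla/glance-api/ on each host; that directory is the one bind-mounted read-only into the container at /var/lib/kolla/config_files/ in the volume lists shown earlier. A minimal sketch of that templating step (template names illustrative):

# Minimal sketch of rendering the kolla config files that the container
# later consumes via the /var/lib/kolla/config_files/ bind mount.
- name: Copying over config.json files for services
  ansible.builtin.template:
    src: glance-api.json.j2
    dest: /etc/kolla/glance-api/config.json
    mode: "0660"
  notify: Restart glance-api container

- name: Copying over glance-api.conf
  ansible.builtin.template:
    src: glance-api.conf.j2
    dest: /etc/kolla/glance-api/glance-api.conf
    mode: "0660"
  notify: Restart glance-api container
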
Saturday 05 April 2025 12:41:51 +0000 (0:00:07.986) 0:01:34.831 ******** 2025-04-05 12:43:18.262766 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.262837 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.262853 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.262867 | orchestrator | 2025-04-05 12:43:18.262881 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-04-05 12:43:18.262895 | orchestrator | Saturday 05 April 2025 12:41:56 +0000 (0:00:04.732) 0:01:39.564 ******** 2025-04-05 12:43:18.262909 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.262923 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.262936 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.262958 | orchestrator | 2025-04-05 12:43:18.262972 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-04-05 12:43:18.262986 | orchestrator | Saturday 05 April 2025 12:42:05 +0000 (0:00:09.361) 0:01:48.925 ******** 2025-04-05 12:43:18.263000 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.263013 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.263027 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.263041 | orchestrator | 2025-04-05 12:43:18.263061 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-04-05 12:43:18.263076 | orchestrator | Saturday 05 April 2025 12:42:06 +0000 (0:00:00.351) 0:01:49.277 ******** 2025-04-05 12:43:18.263090 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-05 12:43:18.263104 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.263118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-05 12:43:18.263132 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.263146 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-05 12:43:18.263160 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.263173 | orchestrator | 2025-04-05 12:43:18.263187 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-04-05 12:43:18.263201 | orchestrator | Saturday 05 April 2025 12:42:10 +0000 (0:00:04.588) 0:01:53.865 ******** 2025-04-05 12:43:18.263216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.263238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.263261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.263277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.263308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-05 12:43:18.263322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-05 12:43:18.263341 | orchestrator | 2025-04-05 12:43:18.263354 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-05 12:43:18.263367 | orchestrator | Saturday 05 April 2025 12:42:14 +0000 (0:00:04.030) 0:01:57.896 ******** 2025-04-05 12:43:18.263379 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:43:18.263392 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:43:18.263404 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:43:18.263416 | orchestrator | 2025-04-05 12:43:18.263429 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-04-05 12:43:18.263442 | orchestrator | Saturday 05 April 2025 12:42:15 +0000 (0:00:00.247) 
0:01:58.143 ******** 2025-04-05 12:43:18.263454 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:18.263466 | orchestrator | 2025-04-05 12:43:18.263479 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-04-05 12:43:18.263491 | orchestrator | Saturday 05 April 2025 12:42:17 +0000 (0:00:02.028) 0:02:00.172 ******** 2025-04-05 12:43:18.263503 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:18.263516 | orchestrator | 2025-04-05 12:43:18.263528 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-04-05 12:43:18.263546 | orchestrator | Saturday 05 April 2025 12:42:19 +0000 (0:00:01.982) 0:02:02.155 ******** 2025-04-05 12:43:21.302739 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:21.302916 | orchestrator | 2025-04-05 12:43:21.302937 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-04-05 12:43:21.302954 | orchestrator | Saturday 05 April 2025 12:42:20 +0000 (0:00:01.938) 0:02:04.093 ******** 2025-04-05 12:43:21.302968 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:21.302982 | orchestrator | 2025-04-05 12:43:21.302996 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-04-05 12:43:21.303010 | orchestrator | Saturday 05 April 2025 12:42:42 +0000 (0:00:21.853) 0:02:25.947 ******** 2025-04-05 12:43:21.303024 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:21.303038 | orchestrator | 2025-04-05 12:43:21.303052 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-05 12:43:21.303066 | orchestrator | Saturday 05 April 2025 12:42:45 +0000 (0:00:02.273) 0:02:28.221 ******** 2025-04-05 12:43:21.303080 | orchestrator | 2025-04-05 12:43:21.303094 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-05 12:43:21.303107 | orchestrator | Saturday 05 April 2025 12:42:45 +0000 (0:00:00.051) 0:02:28.272 ******** 2025-04-05 12:43:21.303121 | orchestrator | 2025-04-05 12:43:21.303135 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-05 12:43:21.303149 | orchestrator | Saturday 05 April 2025 12:42:45 +0000 (0:00:00.049) 0:02:28.321 ******** 2025-04-05 12:43:21.303163 | orchestrator | 2025-04-05 12:43:21.303177 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-04-05 12:43:21.303191 | orchestrator | Saturday 05 April 2025 12:42:45 +0000 (0:00:00.053) 0:02:28.374 ******** 2025-04-05 12:43:21.303204 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:43:21.303218 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:43:21.303232 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:43:21.303246 | orchestrator | 2025-04-05 12:43:21.303262 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:43:21.303298 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-04-05 12:43:21.303316 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-05 12:43:21.303332 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-05 12:43:21.303379 | orchestrator | 2025-04-05 12:43:21.303395 | orchestrator | 2025-04-05 
12:43:21.303411 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:43:21.303427 | orchestrator | Saturday 05 April 2025 12:43:15 +0000 (0:00:29.930) 0:02:58.305 ******** 2025-04-05 12:43:21.303442 | orchestrator | =============================================================================== 2025-04-05 12:43:21.303458 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.93s 2025-04-05 12:43:21.303473 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 21.85s 2025-04-05 12:43:21.303489 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 10.96s 2025-04-05 12:43:21.303504 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.40s 2025-04-05 12:43:21.303520 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 9.36s 2025-04-05 12:43:21.303536 | orchestrator | glance : Copying over config.json files for services -------------------- 9.04s 2025-04-05 12:43:21.303550 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.99s 2025-04-05 12:43:21.303564 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.06s 2025-04-05 12:43:21.303577 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.53s 2025-04-05 12:43:21.303597 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.29s 2025-04-05 12:43:21.303611 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.23s 2025-04-05 12:43:21.303625 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.73s 2025-04-05 12:43:21.303638 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.59s 2025-04-05 12:43:21.303652 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.18s 2025-04-05 12:43:21.303666 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.06s 2025-04-05 12:43:21.303680 | orchestrator | glance : Check glance containers ---------------------------------------- 4.03s 2025-04-05 12:43:21.303693 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.97s 2025-04-05 12:43:21.303707 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.91s 2025-04-05 12:43:21.303721 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.33s 2025-04-05 12:43:21.303735 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 2.86s 2025-04-05 12:43:21.303748 | orchestrator | 2025-04-05 12:43:18 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:21.303763 | orchestrator | 2025-04-05 12:43:18 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:21.303796 | orchestrator | 2025-04-05 12:43:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:21.303963 | orchestrator | 2025-04-05 12:43:21 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:21.306103 | orchestrator | 2025-04-05 12:43:21 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:21.306144 | orchestrator | 2025-04-05 
12:43:21 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:21.307814 | orchestrator | 2025-04-05 12:43:21 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:24.350851 | orchestrator | 2025-04-05 12:43:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:24.350978 | orchestrator | 2025-04-05 12:43:24 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:24.352323 | orchestrator | 2025-04-05 12:43:24 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:24.355011 | orchestrator | 2025-04-05 12:43:24 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:24.357090 | orchestrator | 2025-04-05 12:43:24 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:27.394126 | orchestrator | 2025-04-05 12:43:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:27.394250 | orchestrator | 2025-04-05 12:43:27 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:27.395010 | orchestrator | 2025-04-05 12:43:27 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:27.396480 | orchestrator | 2025-04-05 12:43:27 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:27.397921 | orchestrator | 2025-04-05 12:43:27 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:30.433730 | orchestrator | 2025-04-05 12:43:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:30.433909 | orchestrator | 2025-04-05 12:43:30 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:30.435901 | orchestrator | 2025-04-05 12:43:30 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:30.436821 | orchestrator | 2025-04-05 12:43:30 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:30.436855 | orchestrator | 2025-04-05 12:43:30 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:33.484677 | orchestrator | 2025-04-05 12:43:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:33.484835 | orchestrator | 2025-04-05 12:43:33 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:33.485964 | orchestrator | 2025-04-05 12:43:33 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:33.488654 | orchestrator | 2025-04-05 12:43:33 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:33.490666 | orchestrator | 2025-04-05 12:43:33 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:33.490836 | orchestrator | 2025-04-05 12:43:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:36.530332 | orchestrator | 2025-04-05 12:43:36 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:36.533115 | orchestrator | 2025-04-05 12:43:36 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:36.535407 | orchestrator | 2025-04-05 12:43:36 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:36.537932 | orchestrator | 2025-04-05 12:43:36 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:39.575390 | orchestrator | 2025-04-05 
12:43:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:39.575610 | orchestrator | 2025-04-05 12:43:39 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:39.576549 | orchestrator | 2025-04-05 12:43:39 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:39.576582 | orchestrator | 2025-04-05 12:43:39 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:39.578093 | orchestrator | 2025-04-05 12:43:39 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:42.630837 | orchestrator | 2025-04-05 12:43:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:42.630986 | orchestrator | 2025-04-05 12:43:42 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:42.634861 | orchestrator | 2025-04-05 12:43:42 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:42.638553 | orchestrator | 2025-04-05 12:43:42 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:42.639624 | orchestrator | 2025-04-05 12:43:42 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:42.639852 | orchestrator | 2025-04-05 12:43:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:45.685171 | orchestrator | 2025-04-05 12:43:45 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:45.687139 | orchestrator | 2025-04-05 12:43:45 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:45.689725 | orchestrator | 2025-04-05 12:43:45 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:45.691452 | orchestrator | 2025-04-05 12:43:45 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:45.691974 | orchestrator | 2025-04-05 12:43:45 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:48.736850 | orchestrator | 2025-04-05 12:43:48 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:48.737752 | orchestrator | 2025-04-05 12:43:48 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:48.737822 | orchestrator | 2025-04-05 12:43:48 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:48.739180 | orchestrator | 2025-04-05 12:43:48 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state STARTED 2025-04-05 12:43:51.781715 | orchestrator | 2025-04-05 12:43:48 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:51.781884 | orchestrator | 2025-04-05 12:43:51 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:51.783165 | orchestrator | 2025-04-05 12:43:51 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:51.785618 | orchestrator | 2025-04-05 12:43:51 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:51.786788 | orchestrator | 2025-04-05 12:43:51 | INFO  | Task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 is in state SUCCESS 2025-04-05 12:43:51.786957 | orchestrator | 2025-04-05 12:43:51 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:51.788177 | orchestrator | 2025-04-05 12:43:51.788261 | orchestrator | 2025-04-05 12:43:51.788279 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
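
The interleaved "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the deploy wrapper polling the OSISM task queue roughly once per second until each Kolla play reports a terminal state. The following is a minimal sketch of that polling pattern only; get_task_state() is a hypothetical stand-in for the real task-backend lookup, not the actual osism client API.

import time

# Hypothetical status lookup -- the real deployment queries the OSISM task
# backend; here a canned sequence simulates a task that finishes after a few checks.
_FAKE_STATES = iter(["STARTED"] * 3 + ["SUCCESS"])

def get_task_state(task_id: str) -> str:
    """Return the current state of a task (stand-in for the real API call)."""
    return next(_FAKE_STATES, "SUCCESS")

def wait_for_tasks(task_ids, interval=1.0):
    """Poll until every task reports a terminal state, logging like the console above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(["dbdc7c73-e396-4948-a358-1e2f043a4ca9"])

In the run above, task 4518e32b-fbbb-4df2-a712-553ff00ec5f9 flips to SUCCESS at 12:43:51 while the other three keep reporting STARTED.
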
2025-04-05 12:43:51.788322 | orchestrator | 2025-04-05 12:43:51.788337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:43:51.788351 | orchestrator | Saturday 05 April 2025 12:43:01 +0000 (0:00:00.271) 0:00:00.271 ******** 2025-04-05 12:43:51.788366 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:43:51.788381 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:43:51.788395 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:43:51.788409 | orchestrator | 2025-04-05 12:43:51.788423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:43:51.788438 | orchestrator | Saturday 05 April 2025 12:43:02 +0000 (0:00:00.305) 0:00:00.576 ******** 2025-04-05 12:43:51.788453 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-04-05 12:43:51.788467 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-04-05 12:43:51.788481 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-04-05 12:43:51.788495 | orchestrator | 2025-04-05 12:43:51.788509 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-04-05 12:43:51.788523 | orchestrator | 2025-04-05 12:43:51.788557 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-05 12:43:51.788572 | orchestrator | Saturday 05 April 2025 12:43:02 +0000 (0:00:00.469) 0:00:01.045 ******** 2025-04-05 12:43:51.788587 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:43:51.788603 | orchestrator | 2025-04-05 12:43:51.788617 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-04-05 12:43:51.788631 | orchestrator | Saturday 05 April 2025 12:43:03 +0000 (0:00:00.560) 0:00:01.606 ******** 2025-04-05 12:43:51.788646 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-04-05 12:43:51.788660 | orchestrator | 2025-04-05 12:43:51.788674 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-04-05 12:43:51.788688 | orchestrator | Saturday 05 April 2025 12:43:06 +0000 (0:00:03.005) 0:00:04.611 ******** 2025-04-05 12:43:51.788702 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-04-05 12:43:51.788717 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-04-05 12:43:51.788732 | orchestrator | 2025-04-05 12:43:51.788748 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-04-05 12:43:51.788764 | orchestrator | Saturday 05 April 2025 12:43:12 +0000 (0:00:05.971) 0:00:10.582 ******** 2025-04-05 12:43:51.788779 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:43:51.788819 | orchestrator | 2025-04-05 12:43:51.788837 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-04-05 12:43:51.788852 | orchestrator | Saturday 05 April 2025 12:43:15 +0000 (0:00:03.159) 0:00:13.742 ******** 2025-04-05 12:43:51.788868 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:43:51.788883 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-05 12:43:51.788899 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
service) 2025-04-05 12:43:51.788915 | orchestrator | 2025-04-05 12:43:51.788930 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-04-05 12:43:51.788946 | orchestrator | Saturday 05 April 2025 12:43:22 +0000 (0:00:06.907) 0:00:20.650 ******** 2025-04-05 12:43:51.788962 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:43:51.788978 | orchestrator | 2025-04-05 12:43:51.788994 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-04-05 12:43:51.789010 | orchestrator | Saturday 05 April 2025 12:43:24 +0000 (0:00:02.806) 0:00:23.456 ******** 2025-04-05 12:43:51.789026 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-05 12:43:51.789042 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-05 12:43:51.789057 | orchestrator | 2025-04-05 12:43:51.789073 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-04-05 12:43:51.789093 | orchestrator | Saturday 05 April 2025 12:43:31 +0000 (0:00:06.198) 0:00:29.655 ******** 2025-04-05 12:43:51.789108 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-04-05 12:43:51.789122 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-04-05 12:43:51.789136 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-04-05 12:43:51.789150 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-04-05 12:43:51.789164 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-04-05 12:43:51.789178 | orchestrator | 2025-04-05 12:43:51.789192 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-05 12:43:51.789206 | orchestrator | Saturday 05 April 2025 12:43:45 +0000 (0:00:14.316) 0:00:43.972 ******** 2025-04-05 12:43:51.789221 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:43:51.789235 | orchestrator | 2025-04-05 12:43:51.789249 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-04-05 12:43:51.789270 | orchestrator | Saturday 05 April 2025 12:43:46 +0000 (0:00:00.573) 0:00:44.545 ******** 2025-04-05 12:43:51.789286 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-04-05 12:43:51.789323 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1, "module_stderr":
    Traceback (most recent call last):
      File "/tmp/ansible-tmp-1743857027.3582442-6647-157496412945785/AnsiballZ_compute_flavor.py", line 107, in <module>
        _ansiballz_main()
      File "/tmp/ansible-tmp-1743857027.3582442-6647-157496412945785/AnsiballZ_compute_flavor.py", line 99, in _ansiballz_main
        invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
      File "/tmp/ansible-tmp-1743857027.3582442-6647-157496412945785/AnsiballZ_compute_flavor.py", line 47, in invoke_module
        runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),
      File "/usr/lib/python3.10/runpy.py", line 224, in run_module
        return _run_module_code(code, init_globals, run_name, mod_spec)
      File "/usr/lib/python3.10/runpy.py", line 96, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/tmp/ansible_os_nova_flavor_payload_cbpqd3ah/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py", line 367, in <module>
      File "/tmp/ansible_os_nova_flavor_payload_cbpqd3ah/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py", line 363, in main
      File "/tmp/ansible_os_nova_flavor_payload_cbpqd3ah/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py", line 417, in __call__
      File "/tmp/ansible_os_nova_flavor_payload_cbpqd3ah/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py", line 220, in run
      File "/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py", line 89, in __get__
        proxy = self._make_proxy(instance)
      File "/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py", line 287, in _make_proxy
        found_version = temp_adapter.get_api_major_version()
      File "/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 352, in get_api_major_version
        return self.session.get_api_major_version(auth or self.auth, **kwargs)
      File "/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py", line 1289, in get_api_major_version
        return auth.get_api_major_version(self, **kwargs)
      File "/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py", line 497, in get_api_major_version
        data = get_endpoint_data(discover_versions=discover_versions)
      File "/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py", line 272, in get_endpoint_data
        endpoint_data = service_catalog.endpoint_data_for(
      File "/opt/ansible/lib/python3.10/site-packages/keystoneauth1/access/service_catalog.py", line 459, in endpoint_data_for
        raise exceptions.EndpointNotFound(msg)
    keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found
} 2025-04-05 12:43:51.789343 | orchestrator | 2025-04-05 12:43:51.789357 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:43:51.789372 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-04-05 12:43:51.789395 |
orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:43:51.789410 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:43:51.789424 | orchestrator | 2025-04-05 12:43:51.789438 | orchestrator | 2025-04-05 12:43:51.789452 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:43:51.789466 | orchestrator | Saturday 05 April 2025 12:43:48 +0000 (0:00:02.979) 0:00:47.525 ******** 2025-04-05 12:43:51.789480 | orchestrator | =============================================================================== 2025-04-05 12:43:51.789494 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.32s 2025-04-05 12:43:51.789520 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.91s 2025-04-05 12:43:54.824970 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.20s 2025-04-05 12:43:54.825115 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.97s 2025-04-05 12:43:54.825134 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.16s 2025-04-05 12:43:54.825149 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.01s 2025-04-05 12:43:54.825164 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 2.98s 2025-04-05 12:43:54.825178 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 2.81s 2025-04-05 12:43:54.825193 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.57s 2025-04-05 12:43:54.825208 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.56s 2025-04-05 12:43:54.825223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-04-05 12:43:54.825237 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-04-05 12:43:54.825271 | orchestrator | 2025-04-05 12:43:54 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:54.826340 | orchestrator | 2025-04-05 12:43:54 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:54.828043 | orchestrator | 2025-04-05 12:43:54 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:43:57.869573 | orchestrator | 2025-04-05 12:43:54 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:43:57.869671 | orchestrator | 2025-04-05 12:43:57 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:43:57.872718 | orchestrator | 2025-04-05 12:43:57 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:43:57.874326 | orchestrator | 2025-04-05 12:43:57 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:00.919621 | orchestrator | 2025-04-05 12:43:57 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:00.919758 | orchestrator | 2025-04-05 12:44:00 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:00.921528 | orchestrator | 2025-04-05 12:44:00 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:00.923963 | orchestrator | 2025-04-05 12:44:00 | 
INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:03.972790 | orchestrator | 2025-04-05 12:44:00 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:03.972961 | orchestrator | 2025-04-05 12:44:03 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:03.977393 | orchestrator | 2025-04-05 12:44:03 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:03.978553 | orchestrator | 2025-04-05 12:44:03 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:03.978829 | orchestrator | 2025-04-05 12:44:03 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:07.022690 | orchestrator | 2025-04-05 12:44:07 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:07.024484 | orchestrator | 2025-04-05 12:44:07 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:07.026872 | orchestrator | 2025-04-05 12:44:07 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:07.027725 | orchestrator | 2025-04-05 12:44:07 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:10.078248 | orchestrator | 2025-04-05 12:44:10 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:10.084240 | orchestrator | 2025-04-05 12:44:10 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:10.085635 | orchestrator | 2025-04-05 12:44:10 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:13.131001 | orchestrator | 2025-04-05 12:44:10 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:13.131129 | orchestrator | 2025-04-05 12:44:13 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:13.132537 | orchestrator | 2025-04-05 12:44:13 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:13.134552 | orchestrator | 2025-04-05 12:44:13 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:16.196644 | orchestrator | 2025-04-05 12:44:13 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:16.196777 | orchestrator | 2025-04-05 12:44:16 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:16.197711 | orchestrator | 2025-04-05 12:44:16 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:16.201416 | orchestrator | 2025-04-05 12:44:16 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:19.250373 | orchestrator | 2025-04-05 12:44:16 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:19.250506 | orchestrator | 2025-04-05 12:44:19 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:19.251523 | orchestrator | 2025-04-05 12:44:19 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:19.254147 | orchestrator | 2025-04-05 12:44:19 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:22.303900 | orchestrator | 2025-04-05 12:44:19 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:22.304039 | orchestrator | 2025-04-05 12:44:22 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:22.306514 | orchestrator | 2025-04-05 12:44:22 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f 
is in state STARTED 2025-04-05 12:44:22.308333 | orchestrator | 2025-04-05 12:44:22 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:25.362493 | orchestrator | 2025-04-05 12:44:22 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:25.362624 | orchestrator | 2025-04-05 12:44:25 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:25.364559 | orchestrator | 2025-04-05 12:44:25 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:25.366875 | orchestrator | 2025-04-05 12:44:25 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:28.416045 | orchestrator | 2025-04-05 12:44:25 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:28.416181 | orchestrator | 2025-04-05 12:44:28 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:28.419212 | orchestrator | 2025-04-05 12:44:28 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:28.420945 | orchestrator | 2025-04-05 12:44:28 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:31.454095 | orchestrator | 2025-04-05 12:44:28 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:31.454229 | orchestrator | 2025-04-05 12:44:31 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:31.454668 | orchestrator | 2025-04-05 12:44:31 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:31.455398 | orchestrator | 2025-04-05 12:44:31 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:34.505384 | orchestrator | 2025-04-05 12:44:31 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:34.505525 | orchestrator | 2025-04-05 12:44:34 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:34.506480 | orchestrator | 2025-04-05 12:44:34 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:34.508021 | orchestrator | 2025-04-05 12:44:34 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:37.546671 | orchestrator | 2025-04-05 12:44:34 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:37.546858 | orchestrator | 2025-04-05 12:44:37 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:37.548054 | orchestrator | 2025-04-05 12:44:37 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:37.548100 | orchestrator | 2025-04-05 12:44:37 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:40.603594 | orchestrator | 2025-04-05 12:44:37 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:40.603724 | orchestrator | 2025-04-05 12:44:40 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:40.604931 | orchestrator | 2025-04-05 12:44:40 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:40.606477 | orchestrator | 2025-04-05 12:44:40 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:43.680124 | orchestrator | 2025-04-05 12:44:40 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:43.680243 | orchestrator | 2025-04-05 12:44:43 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:43.683854 | 
orchestrator | 2025-04-05 12:44:43 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:46.739106 | orchestrator | 2025-04-05 12:44:43 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:46.739225 | orchestrator | 2025-04-05 12:44:43 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:46.739261 | orchestrator | 2025-04-05 12:44:46 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:46.739666 | orchestrator | 2025-04-05 12:44:46 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:46.741728 | orchestrator | 2025-04-05 12:44:46 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:49.793312 | orchestrator | 2025-04-05 12:44:46 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:49.793473 | orchestrator | 2025-04-05 12:44:49 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:49.796071 | orchestrator | 2025-04-05 12:44:49 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:49.798186 | orchestrator | 2025-04-05 12:44:49 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:52.846233 | orchestrator | 2025-04-05 12:44:49 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:52.846345 | orchestrator | 2025-04-05 12:44:52 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:52.847566 | orchestrator | 2025-04-05 12:44:52 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:52.850210 | orchestrator | 2025-04-05 12:44:52 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:55.900657 | orchestrator | 2025-04-05 12:44:52 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:55.900781 | orchestrator | 2025-04-05 12:44:55 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:55.902617 | orchestrator | 2025-04-05 12:44:55 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:55.904786 | orchestrator | 2025-04-05 12:44:55 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:44:58.961008 | orchestrator | 2025-04-05 12:44:55 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:44:58.961136 | orchestrator | 2025-04-05 12:44:58 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:44:58.961790 | orchestrator | 2025-04-05 12:44:58 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:44:58.963294 | orchestrator | 2025-04-05 12:44:58 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:02.017270 | orchestrator | 2025-04-05 12:44:58 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:02.017392 | orchestrator | 2025-04-05 12:45:02 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:45:02.019112 | orchestrator | 2025-04-05 12:45:02 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:45:02.020984 | orchestrator | 2025-04-05 12:45:02 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:05.074927 | orchestrator | 2025-04-05 12:45:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:05.075070 | orchestrator | 2025-04-05 12:45:05 | INFO  | Task 
dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:45:05.076718 | orchestrator | 2025-04-05 12:45:05 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:45:05.077891 | orchestrator | 2025-04-05 12:45:05 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:05.078081 | orchestrator | 2025-04-05 12:45:05 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:08.124367 | orchestrator | 2025-04-05 12:45:08 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state STARTED 2025-04-05 12:45:08.126146 | orchestrator | 2025-04-05 12:45:08 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state STARTED 2025-04-05 12:45:08.128610 | orchestrator | 2025-04-05 12:45:08 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:11.189926 | orchestrator | 2025-04-05 12:45:08 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:11.190588 | orchestrator | 2025-04-05 12:45:11.190684 | orchestrator | 2025-04-05 12:45:11.190700 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:45:11.191127 | orchestrator | 2025-04-05 12:45:11.191155 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:45:11.191181 | orchestrator | Saturday 05 April 2025 12:43:18 +0000 (0:00:00.222) 0:00:00.222 ******** 2025-04-05 12:45:11.191205 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:45:11.191231 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:45:11.191256 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:45:11.191282 | orchestrator | 2025-04-05 12:45:11.191307 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:45:11.191332 | orchestrator | Saturday 05 April 2025 12:43:18 +0000 (0:00:00.333) 0:00:00.555 ******** 2025-04-05 12:45:11.191357 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-04-05 12:45:11.191383 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-04-05 12:45:11.191406 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-04-05 12:45:11.191433 | orchestrator | 2025-04-05 12:45:11.191457 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-04-05 12:45:11.191481 | orchestrator | 2025-04-05 12:45:11.191506 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-05 12:45:11.191531 | orchestrator | Saturday 05 April 2025 12:43:19 +0000 (0:00:00.408) 0:00:00.964 ******** 2025-04-05 12:45:11.191554 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:45:11.191570 | orchestrator | 2025-04-05 12:45:11.191584 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-04-05 12:45:11.191598 | orchestrator | Saturday 05 April 2025 12:43:19 +0000 (0:00:00.546) 0:00:01.510 ******** 2025-04-05 12:45:11.191615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.191637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.191653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.191667 | orchestrator | 2025-04-05 12:45:11.191681 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-04-05 12:45:11.191726 | orchestrator | Saturday 05 April 2025 12:43:20 +0000 (0:00:00.888) 0:00:02.399 ******** 2025-04-05 12:45:11.191747 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-04-05 12:45:11.191764 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-04-05 12:45:11.191786 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:45:11.191820 | orchestrator | 2025-04-05 12:45:11.191891 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-05 12:45:11.191917 | orchestrator | Saturday 05 April 2025 12:43:21 +0000 (0:00:00.502) 0:00:02.901 ******** 2025-04-05 12:45:11.191942 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:45:11.191967 | orchestrator | 2025-04-05 12:45:11.191993 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-04-05 12:45:11.192019 | orchestrator | Saturday 05 April 2025 12:43:21 +0000 (0:00:00.488) 0:00:03.390 ******** 2025-04-05 12:45:11.192216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.192248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.192265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.192279 | orchestrator | 2025-04-05 12:45:11.192294 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-04-05 12:45:11.192308 | orchestrator | Saturday 05 April 2025 12:43:22 +0000 (0:00:01.304) 0:00:04.694 ******** 2025-04-05 12:45:11.192323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192367 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.192383 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.192438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192455 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.192470 | orchestrator | 2025-04-05 12:45:11.192484 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-04-05 12:45:11.192509 | orchestrator | Saturday 05 April 2025 12:43:23 +0000 (0:00:00.405) 0:00:05.099 ******** 2025-04-05 12:45:11.192533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192557 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.192582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192606 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.192685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-05 12:45:11.192767 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.192798 | orchestrator | 2025-04-05 12:45:11.192983 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-04-05 12:45:11.193074 | orchestrator | Saturday 05 April 2025 12:43:23 +0000 (0:00:00.529) 0:00:05.629 ******** 2025-04-05 12:45:11.193095 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193266 | orchestrator | 2025-04-05 12:45:11.193287 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-04-05 12:45:11.193309 | orchestrator | Saturday 05 April 2025 12:43:24 +0000 (0:00:01.156) 0:00:06.786 ******** 2025-04-05 12:45:11.193333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.193416 | orchestrator | 2025-04-05 12:45:11.193437 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-04-05 12:45:11.193457 | orchestrator | Saturday 05 April 2025 12:43:26 +0000 (0:00:01.272) 0:00:08.058 ******** 2025-04-05 12:45:11.193478 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.193492 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.193504 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.193517 | orchestrator | 2025-04-05 12:45:11.193529 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-04-05 12:45:11.193541 | orchestrator | Saturday 05 April 2025 12:43:26 +0000 (0:00:00.242) 0:00:08.301 ******** 2025-04-05 12:45:11.193554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-05 12:45:11.193576 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-05 12:45:11.193588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-05 12:45:11.193600 | orchestrator | 2025-04-05 12:45:11.193612 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-04-05 12:45:11.193625 | orchestrator | Saturday 05 April 2025 12:43:27 +0000 (0:00:01.126) 0:00:09.427 ******** 2025-04-05 12:45:11.193637 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-05 12:45:11.193650 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-05 12:45:11.193662 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-05 12:45:11.193683 | orchestrator | 2025-04-05 12:45:11.193703 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-04-05 12:45:11.193723 | orchestrator | Saturday 05 April 2025 12:43:28 +0000 (0:00:01.078) 0:00:10.506 ******** 2025-04-05 12:45:11.193793 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:45:11.193819 | orchestrator | 2025-04-05 12:45:11.193869 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-04-05 12:45:11.193884 | orchestrator | Saturday 05 April 2025 12:43:29 +0000 (0:00:00.404) 0:00:10.911 ******** 2025-04-05 12:45:11.193896 | orchestrator | [WARNING]: Skipped 
'/etc/kolla/grafana/dashboards' path due to this access 2025-04-05 12:45:11.193909 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-04-05 12:45:11.193921 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:45:11.193935 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:45:11.193947 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:45:11.193960 | orchestrator | 2025-04-05 12:45:11.193972 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-04-05 12:45:11.193985 | orchestrator | Saturday 05 April 2025 12:43:29 +0000 (0:00:00.769) 0:00:11.680 ******** 2025-04-05 12:45:11.193997 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.194009 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.194058 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.194071 | orchestrator | 2025-04-05 12:45:11.194083 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-04-05 12:45:11.194096 | orchestrator | Saturday 05 April 2025 12:43:30 +0000 (0:00:00.392) 0:00:12.073 ******** 2025-04-05 12:45:11.194119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1330498, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1225135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1330498, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1225135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1330498, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1225135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1330483, 
'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1175137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1330483, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1175137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1330483, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1175137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1330475, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1330475, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1330475, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1330490, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1330490, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1330490, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330458, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1125135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330458, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1125135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194464 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330458, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1125135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1330478, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1330478, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1330478, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1155136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1330488, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1330488, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1330488, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330455, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1115134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330455, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1115134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330455, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1115134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11 | INFO  | Task dbdc7c73-e396-4948-a358-1e2f043a4ca9 is in state SUCCESS 2025-04-05 12:45:11.194658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330438, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1075134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330438, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1075134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330438, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1075134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1330461, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1135135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1330461, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1135135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1330461, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1135135, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330449, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330449, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330449, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1330486, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1330486, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
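The item echoed for every host in the loops above is the grafana service definition that the kolla-ansible role iterates over. Rendered as YAML purely for readability (a reconstruction of the logged dict, not an excerpt from the role's own defaults), it corresponds roughly to:

    # grafana service definition as printed in the loop output above (YAML rendering for readability)
    grafana:
      container_name: grafana
      group: grafana
      enabled: true
      image: registry.osism.tech/kolla/grafana:2024.1
      volumes:
        - "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "kolla_logs:/var/log/kolla/"
      dimensions: {}
      haproxy:
        grafana_server:            # internal VIP listener
          enabled: "yes"
          mode: http
          external: false
          port: "3000"
          listen_port: "3000"
        grafana_server_external:   # external listener behind api.testbed.osism.xyz
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "3000"
          listen_port: "3000"

The two haproxy entries are what the load balancer configuration later consumes to expose Grafana on the internal VIP and, via api.testbed.osism.xyz, externally on port 3000.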
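The two provisioning tasks above ("Configuring Prometheus as data source for Grafana" and "Configuring dashboards provisioning") template Grafana provisioning files from /ansible/roles/grafana/templates/prometheus.yaml.j2 and the testbed overlay /opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml. The rendered contents do not appear in the log; as a generic, illustrative sketch of Grafana's provisioning file format only (the names, URL, and path below are placeholders, not the actual template values), such files typically look like:

    # Illustrative data source provisioning (placeholder values, not the rendered prometheus.yaml)
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.example.internal:9091   # placeholder endpoint
        isDefault: true

    # Illustrative dashboard provider (placeholder path, not the overlay provisioning.yaml)
    apiVersion: 1
    providers:
      - name: default
        type: file
        options:
          path: /var/lib/grafana/dashboards

The "Copying over custom dashboards" loop that follows distributes the JSON files found under /operations/grafana/dashboards/ so that a file-based provider like the one sketched above can pick them up.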
2025-04-05 12:45:11.194927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1330486, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1185136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1330468, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.194997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1330468, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1330468, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1330495, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1195135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1330495, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1195135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1330495, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1195135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330452, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330452, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330452, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 49139, 'inode': 1330480, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1165135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1330480, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1165135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1330480, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1165135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330441, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330441, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330441, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 
1743853918.1095135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330450, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330450, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330450, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1105134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1330470, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1330470, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1330470, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1330556, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.145514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1330556, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.145514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1330556, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.145514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1330542, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 
12:45:11.195322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1330542, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1330542, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1330508, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1330508, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1330508, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195375 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1330595, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.149514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1330595, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.149514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1330595, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.149514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1330511, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1330511, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195438 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1330511, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1235137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1330591, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1330591, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1330591, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1330597, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.151514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1330597, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.151514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1330597, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.151514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1330583, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.146514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1330583, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.146514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1330583, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.146514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1330590, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1330590, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1330590, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1330515, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1245136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1330515, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1245136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1330515, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1245136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1330547, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1330547, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1330547, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1355138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1330603, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.152514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1330603, 'dev': 
208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.152514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1330603, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.152514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1330592, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1330592, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1330592, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.148514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1330520, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1275136, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1330520, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1275136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1330520, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1275136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1330517, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1255138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1330517, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1255138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1330517, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1255138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1330525, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1285138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1330525, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1285138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1330525, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1285138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1330529, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1335137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1330529, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1335137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195903 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1330529, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1335137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1330549, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1330549, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1330549, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1330588, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1330588, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1330588, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1475139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.195986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1330553, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1330553, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1330553, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.1375139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330610, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.154514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330610, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.154514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1330610, 'dev': 208, 'nlink': 1, 'atime': 1743811367.0, 'mtime': 1743811367.0, 'ctime': 1743853918.154514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-05 12:45:11.196060 | orchestrator | 2025-04-05 12:45:11.196070 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-04-05 12:45:11.196086 | orchestrator | Saturday 05 April 2025 12:44:03 +0000 (0:00:32.939) 0:00:45.013 ******** 2025-04-05 12:45:11.196097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.196107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.196118 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-05 12:45:11.196128 | orchestrator | 2025-04-05 12:45:11.196138 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-04-05 12:45:11.196149 | orchestrator | Saturday 05 April 2025 12:44:04 +0000 (0:00:00.909) 0:00:45.923 ******** 2025-04-05 12:45:11.196159 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:45:11.196169 | orchestrator | 2025-04-05 12:45:11.196179 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-04-05 12:45:11.196189 | orchestrator | Saturday 05 April 2025 12:44:06 +0000 (0:00:02.212) 0:00:48.135 ******** 2025-04-05 12:45:11.196199 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:45:11.196208 | orchestrator | 2025-04-05 12:45:11.196218 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-05 12:45:11.196232 | orchestrator | Saturday 05 April 2025 12:44:08 +0000 (0:00:01.948) 0:00:50.083 ******** 2025-04-05 12:45:11.196243 | orchestrator | 2025-04-05 12:45:11.196252 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-05 12:45:11.196262 | orchestrator | Saturday 05 April 2025 12:44:08 +0000 (0:00:00.053) 0:00:50.137 ******** 2025-04-05 12:45:11.196272 | orchestrator | 2025-04-05 12:45:11.196282 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-05 12:45:11.196292 | orchestrator | Saturday 05 April 2025 12:44:08 +0000 (0:00:00.049) 0:00:50.187 ******** 2025-04-05 12:45:11.196302 | orchestrator | 2025-04-05 12:45:11.196312 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-04-05 12:45:11.196321 | orchestrator | Saturday 05 April 2025 12:44:08 +0000 (0:00:00.170) 0:00:50.357 ******** 2025-04-05 12:45:11.196331 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.196341 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.196351 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:45:11.196366 | orchestrator | 2025-04-05 12:45:11.196376 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-04-05 12:45:11.196386 | orchestrator | Saturday 05 April 2025 12:44:14 +0000 (0:00:06.470) 0:00:56.828 ******** 2025-04-05 12:45:11.196396 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.196406 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.196416 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-04-05 12:45:11.196426 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
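At this point the first grafana container has just been restarted and the handler polls its HTTP endpoint until it answers, emitting one "FAILED - RETRYING ... (N retries left)" line per failed attempt; in this run it needed two retries before Grafana came up. Below is a minimal Python sketch of that poll-with-retries pattern, assuming a hypothetical Grafana URL; the deployment itself does this with an Ansible task, not a script.

```python
import time
import urllib.error
import urllib.request


def wait_for_http(url, retries=12, delay=2.0):
    """Poll `url` until it answers, logging one retry line per failed attempt.

    Sketch of the behaviour visible in the log above ("12 retries left",
    "11 retries left", then ok); not the exact Ansible retry semantics.
    """
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if 200 <= resp.status < 400:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        print(f"FAILED - RETRYING: waiting for {url} ({attempt} retries left)")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    # Hypothetical endpoint; in this testbed Grafana listens on port 3000 behind HAProxy.
    wait_for_http("https://api-int.testbed.osism.xyz:3000/login")
```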
2025-04-05 12:45:11.196436 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:45:11.196446 | orchestrator | 2025-04-05 12:45:11.196456 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-04-05 12:45:11.196466 | orchestrator | Saturday 05 April 2025 12:44:40 +0000 (0:00:25.821) 0:01:22.650 ******** 2025-04-05 12:45:11.196476 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.196486 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:45:11.196496 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:45:11.196506 | orchestrator | 2025-04-05 12:45:11.196516 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-04-05 12:45:11.196526 | orchestrator | Saturday 05 April 2025 12:45:04 +0000 (0:00:23.234) 0:01:45.884 ******** 2025-04-05 12:45:11.196536 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:45:11.196546 | orchestrator | 2025-04-05 12:45:11.196556 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-04-05 12:45:11.196566 | orchestrator | Saturday 05 April 2025 12:45:05 +0000 (0:00:01.900) 0:01:47.784 ******** 2025-04-05 12:45:11.196576 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.196585 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:45:11.196595 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:45:11.196605 | orchestrator | 2025-04-05 12:45:11.196615 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-04-05 12:45:11.196625 | orchestrator | Saturday 05 April 2025 12:45:06 +0000 (0:00:00.461) 0:01:48.246 ******** 2025-04-05 12:45:11.196636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-04-05 12:45:11.196647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-04-05 12:45:11.196658 | orchestrator | 2025-04-05 12:45:11.196668 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-04-05 12:45:11.196678 | orchestrator | Saturday 05 April 2025 12:45:08 +0000 (0:00:02.040) 0:01:50.286 ******** 2025-04-05 12:45:11.196688 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:45:11.196698 | orchestrator | 2025-04-05 12:45:11.196708 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:45:11.196718 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:45:11.196729 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:45:11.196739 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-05 12:45:11.196749 | orchestrator | 2025-04-05 12:45:11.196759 | orchestrator | 2025-04-05 12:45:11.196769 | orchestrator | TASKS RECAP 
******************************************************************** 2025-04-05 12:45:11.196784 | orchestrator | Saturday 05 April 2025 12:45:08 +0000 (0:00:00.452) 0:01:50.739 ******** 2025-04-05 12:45:11.196794 | orchestrator | =============================================================================== 2025-04-05 12:45:11.196804 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 32.94s 2025-04-05 12:45:11.196814 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 25.82s 2025-04-05 12:45:11.196824 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.23s 2025-04-05 12:45:11.196851 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.47s 2025-04-05 12:45:11.196866 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.21s 2025-04-05 12:45:11.196881 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.04s 2025-04-05 12:45:14.229033 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.95s 2025-04-05 12:45:14.229141 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.90s 2025-04-05 12:45:14.229156 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s 2025-04-05 12:45:14.229169 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.27s 2025-04-05 12:45:14.229181 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.16s 2025-04-05 12:45:14.229194 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.13s 2025-04-05 12:45:14.229207 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.08s 2025-04-05 12:45:14.229220 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.91s 2025-04-05 12:45:14.229233 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.89s 2025-04-05 12:45:14.229245 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s 2025-04-05 12:45:14.229257 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s 2025-04-05 12:45:14.229270 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.53s 2025-04-05 12:45:14.229282 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.50s 2025-04-05 12:45:14.229295 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.49s 2025-04-05 12:45:14.229308 | orchestrator | 2025-04-05 12:45:11 | INFO  | Task d68f0d10-0c2e-4f12-a071-bb52002bc81f is in state SUCCESS 2025-04-05 12:45:14.229321 | orchestrator | 2025-04-05 12:45:11 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:14.229333 | orchestrator | 2025-04-05 12:45:11 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:14.229362 | orchestrator | 2025-04-05 12:45:14 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:17.288272 | orchestrator | 2025-04-05 12:45:14 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:17.288406 | orchestrator | 2025-04-05 12:45:17 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state 
STARTED 2025-04-05 12:45:20.347136 | orchestrator | 2025-04-05 12:45:17 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:20.347279 | orchestrator | 2025-04-05 12:45:20 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:20.348893 | orchestrator | 2025-04-05 12:45:20 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:23.392276 | orchestrator | 2025-04-05 12:45:23 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:26.432125 | orchestrator | 2025-04-05 12:45:23 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:26.432266 | orchestrator | 2025-04-05 12:45:26 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:29.476678 | orchestrator | 2025-04-05 12:45:26 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:29.476885 | orchestrator | 2025-04-05 12:45:29 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:29.477339 | orchestrator | 2025-04-05 12:45:29 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:32.528035 | orchestrator | 2025-04-05 12:45:32 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:35.587215 | orchestrator | 2025-04-05 12:45:32 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:35.587334 | orchestrator | 2025-04-05 12:45:35 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:38.630398 | orchestrator | 2025-04-05 12:45:35 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:38.630522 | orchestrator | 2025-04-05 12:45:38 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:41.674282 | orchestrator | 2025-04-05 12:45:38 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:41.674415 | orchestrator | 2025-04-05 12:45:41 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:44.723973 | orchestrator | 2025-04-05 12:45:41 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:44.724100 | orchestrator | 2025-04-05 12:45:44 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:47.767819 | orchestrator | 2025-04-05 12:45:44 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:47.767983 | orchestrator | 2025-04-05 12:45:47 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:50.825028 | orchestrator | 2025-04-05 12:45:47 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:50.825174 | orchestrator | 2025-04-05 12:45:50 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:53.873772 | orchestrator | 2025-04-05 12:45:50 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:53.873920 | orchestrator | 2025-04-05 12:45:53 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:56.923075 | orchestrator | 2025-04-05 12:45:53 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:56.923197 | orchestrator | 2025-04-05 12:45:56 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:45:59.969918 | orchestrator | 2025-04-05 12:45:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:45:59.970073 | orchestrator | 2025-04-05 12:45:59 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:02.999765 | orchestrator | 2025-04-05 12:45:59 | INFO  | 
Wait 1 second(s) until the next check 2025-04-05 12:46:02.999915 | orchestrator | 2025-04-05 12:46:02 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:06.059738 | orchestrator | 2025-04-05 12:46:02 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:06.059893 | orchestrator | 2025-04-05 12:46:06 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:09.095490 | orchestrator | 2025-04-05 12:46:06 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:09.095616 | orchestrator | 2025-04-05 12:46:09 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:12.138810 | orchestrator | 2025-04-05 12:46:09 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:12.138994 | orchestrator | 2025-04-05 12:46:12 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:15.177253 | orchestrator | 2025-04-05 12:46:12 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:15.177367 | orchestrator | 2025-04-05 12:46:15 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:15.177984 | orchestrator | 2025-04-05 12:46:15 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:18.223243 | orchestrator | 2025-04-05 12:46:18 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:21.264879 | orchestrator | 2025-04-05 12:46:18 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:21.264993 | orchestrator | 2025-04-05 12:46:21 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:24.304789 | orchestrator | 2025-04-05 12:46:21 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:24.304968 | orchestrator | 2025-04-05 12:46:24 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:27.351614 | orchestrator | 2025-04-05 12:46:24 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:27.351735 | orchestrator | 2025-04-05 12:46:27 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:30.392314 | orchestrator | 2025-04-05 12:46:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:30.392460 | orchestrator | 2025-04-05 12:46:30 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:33.440432 | orchestrator | 2025-04-05 12:46:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:33.440553 | orchestrator | 2025-04-05 12:46:33 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:36.500367 | orchestrator | 2025-04-05 12:46:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:36.500491 | orchestrator | 2025-04-05 12:46:36 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:39.542452 | orchestrator | 2025-04-05 12:46:36 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:39.542587 | orchestrator | 2025-04-05 12:46:39 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:42.590779 | orchestrator | 2025-04-05 12:46:39 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:42.590965 | orchestrator | 2025-04-05 12:46:42 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:45.645661 | orchestrator | 2025-04-05 12:46:42 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:46:45.645781 | orchestrator | 
2025-04-05 12:46:45 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:46:48.695733 | orchestrator | 2025-04-05 12:46:45 | INFO  | Wait 1 second(s) until the next check
[... the identical "is in state STARTED" / "Wait 1 second(s) until the next check" pair repeats roughly every three seconds from 12:46:48 through 12:48:53 ...]
2025-04-05 12:48:56.582064 | orchestrator | 2025-04-05 12:48:53 | INFO  | Wait 1 second(s) until the next check
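The wait loop above (and the final checks just below) is a plain poll-until-terminal pattern: query the task state, sleep, and repeat until the state leaves STARTED. A minimal sketch of that pattern, with a hypothetical get_task_state() callable standing in for the real OSISM API client and assumed Celery-style terminal states:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed Celery-style terminal states


def wait_for_task(task_id, get_task_state, interval=1.0, timeout=3600.0):
    """Poll get_task_state(task_id) until it returns a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        state = get_task_state(task_id)  # hypothetical helper wrapping the task API
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Task {task_id} still in state {state} after {timeout}s")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)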
orchestrator | 2025-04-05 12:48:56 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:48:59.619576 | orchestrator | 2025-04-05 12:48:56 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:48:59.619704 | orchestrator | 2025-04-05 12:48:59 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state STARTED 2025-04-05 12:49:02.672835 | orchestrator | 2025-04-05 12:48:59 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:49:02.673003 | orchestrator | 2025-04-05 12:49:02.674758 | orchestrator | 2025-04-05 12:49:02.674800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:49:02.674928 | orchestrator | 2025-04-05 12:49:02.674946 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:49:02.674961 | orchestrator | Saturday 05 April 2025 12:42:59 +0000 (0:00:00.162) 0:00:00.162 ******** 2025-04-05 12:49:02.674975 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.674992 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:49:02.675007 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:49:02.675021 | orchestrator | 2025-04-05 12:49:02.675036 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:49:02.675051 | orchestrator | Saturday 05 April 2025 12:42:59 +0000 (0:00:00.505) 0:00:00.668 ******** 2025-04-05 12:49:02.675065 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-04-05 12:49:02.675080 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-04-05 12:49:02.675753 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-04-05 12:49:02.675776 | orchestrator | 2025-04-05 12:49:02.675792 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-04-05 12:49:02.675831 | orchestrator | 2025-04-05 12:49:02.675847 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-04-05 12:49:02.675887 | orchestrator | Saturday 05 April 2025 12:43:00 +0000 (0:00:00.766) 0:00:01.434 ******** 2025-04-05 12:49:02.675901 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.675915 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:49:02.675991 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:49:02.676664 | orchestrator | 2025-04-05 12:49:02.676681 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:49:02.676696 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:49:02.676711 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:49:02.676726 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:49:02.676740 | orchestrator | 2025-04-05 12:49:02.676754 | orchestrator | 2025-04-05 12:49:02.676768 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:49:02.676782 | orchestrator | Saturday 05 April 2025 12:45:10 +0000 (0:02:09.715) 0:02:11.150 ******** 2025-04-05 12:49:02.676795 | orchestrator | =============================================================================== 2025-04-05 12:49:02.676809 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 129.72s 2025-04-05 12:49:02.676823 | orchestrator | 
Group hosts based on enabled services ----------------------------------- 0.77s 2025-04-05 12:49:02.676837 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-04-05 12:49:02.676874 | orchestrator | 2025-04-05 12:49:02.676890 | orchestrator | 2025-04-05 12:49:02 | INFO  | Task 67e4240f-50ac-4274-b6e4-4b535bf8fd24 is in state SUCCESS 2025-04-05 12:49:02.677335 | orchestrator | 2025-04-05 12:49:02.677363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:49:02.677384 | orchestrator | 2025-04-05 12:49:02.677416 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-04-05 12:49:02.677431 | orchestrator | Saturday 05 April 2025 12:41:43 +0000 (0:00:00.446) 0:00:00.446 ******** 2025-04-05 12:49:02.677445 | orchestrator | changed: [testbed-manager] 2025-04-05 12:49:02.677460 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.677474 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.677488 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.677502 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.677516 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.677530 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.677544 | orchestrator | 2025-04-05 12:49:02.677558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:49:02.677572 | orchestrator | Saturday 05 April 2025 12:41:43 +0000 (0:00:00.661) 0:00:01.107 ******** 2025-04-05 12:49:02.677585 | orchestrator | changed: [testbed-manager] 2025-04-05 12:49:02.677599 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.677613 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.677626 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.677640 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.677660 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.677674 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.677688 | orchestrator | 2025-04-05 12:49:02.677702 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:49:02.677716 | orchestrator | Saturday 05 April 2025 12:41:44 +0000 (0:00:01.198) 0:00:02.306 ******** 2025-04-05 12:49:02.677730 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-04-05 12:49:02.677750 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-04-05 12:49:02.677765 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-04-05 12:49:02.677792 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-04-05 12:49:02.677838 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-04-05 12:49:02.678005 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-04-05 12:49:02.678068 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-04-05 12:49:02.678085 | orchestrator | 2025-04-05 12:49:02.678100 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-04-05 12:49:02.678114 | orchestrator | 2025-04-05 12:49:02.678128 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-05 12:49:02.678142 | orchestrator | Saturday 05 April 2025 12:41:46 +0000 (0:00:01.538) 0:00:03.844 ******** 2025-04-05 12:49:02.678157 
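The "Group hosts based on Kolla action" / "Group hosts based on enabled services" tasks above put each host into a dynamically named group (here enable_nova_True) that later plays target. A rough Python illustration of that grouping idea, using a hypothetical hostvars mapping with values chosen only for illustration (not Ansible's actual group_by module):

from collections import defaultdict

# hypothetical per-host inventory variables (illustrative values only)
hostvars = {
    "testbed-node-0": {"enable_nova": True},
    "testbed-node-1": {"enable_nova": True},
    "testbed-node-2": {"enable_nova": False},
}

groups = defaultdict(list)
for host, hvars in hostvars.items():
    # mirrors a group_by key such as "enable_nova_{{ enable_nova }}"
    groups[f"enable_nova_{hvars['enable_nova']}"].append(host)

print(dict(groups))  # {'enable_nova_True': [...], 'enable_nova_False': [...]}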
| orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.678171 | orchestrator | 2025-04-05 12:49:02.678185 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-04-05 12:49:02.678199 | orchestrator | Saturday 05 April 2025 12:41:47 +0000 (0:00:01.271) 0:00:05.115 ******** 2025-04-05 12:49:02.678213 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-04-05 12:49:02.678227 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-04-05 12:49:02.678241 | orchestrator | 2025-04-05 12:49:02.678718 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-04-05 12:49:02.678733 | orchestrator | Saturday 05 April 2025 12:41:52 +0000 (0:00:04.464) 0:00:09.580 ******** 2025-04-05 12:49:02.678745 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:49:02.678758 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-05 12:49:02.678771 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.678914 | orchestrator | 2025-04-05 12:49:02.678934 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-05 12:49:02.678948 | orchestrator | Saturday 05 April 2025 12:41:55 +0000 (0:00:03.804) 0:00:13.384 ******** 2025-04-05 12:49:02.678962 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.678976 | orchestrator | 2025-04-05 12:49:02.678990 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-04-05 12:49:02.679004 | orchestrator | Saturday 05 April 2025 12:41:56 +0000 (0:00:00.582) 0:00:13.967 ******** 2025-04-05 12:49:02.679171 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.679186 | orchestrator | 2025-04-05 12:49:02.679200 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-04-05 12:49:02.679213 | orchestrator | Saturday 05 April 2025 12:41:57 +0000 (0:00:01.055) 0:00:15.022 ******** 2025-04-05 12:49:02.679226 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.679239 | orchestrator | 2025-04-05 12:49:02.679252 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-05 12:49:02.679264 | orchestrator | Saturday 05 April 2025 12:41:59 +0000 (0:00:02.037) 0:00:17.060 ******** 2025-04-05 12:49:02.679277 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.679290 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.679303 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.679316 | orchestrator | 2025-04-05 12:49:02.679328 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-05 12:49:02.679341 | orchestrator | Saturday 05 April 2025 12:42:00 +0000 (0:00:00.850) 0:00:17.911 ******** 2025-04-05 12:49:02.679354 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.679367 | orchestrator | 2025-04-05 12:49:02.679380 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-04-05 12:49:02.679394 | orchestrator | Saturday 05 April 2025 12:42:25 +0000 (0:00:24.799) 0:00:42.710 ******** 2025-04-05 12:49:02.679406 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.679419 | orchestrator | 2025-04-05 12:49:02.679432 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-05 12:49:02.679445 | 
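The two bootstrap tasks above create the nova_api and nova_cell0 databases plus a database user with privileges on them; the playbook does this through Ansible tasks, so the sketch below only shows the equivalent SQL, assuming direct access to the MariaDB primary via PyMySQL and placeholder host/credentials:

import pymysql  # assumed available (pip install pymysql)

conn = pymysql.connect(host="db-primary.example", user="root", password="REDACTED")
try:
    with conn.cursor() as cur:
        for db in ("nova_api", "nova_cell0"):
            cur.execute(f"CREATE DATABASE IF NOT EXISTS `{db}` CHARACTER SET utf8mb4")
        # %% escapes a literal % because a parameter tuple is passed
        cur.execute("CREATE USER IF NOT EXISTS 'nova'@'%%' IDENTIFIED BY %s", ("REDACTED",))
        for db in ("nova_api", "nova_cell0"):
            # no parameters here, so the literal % needs no escaping
            cur.execute(f"GRANT ALL PRIVILEGES ON `{db}`.* TO 'nova'@'%'")
    conn.commit()
finally:
    conn.close()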
orchestrator | Saturday 05 April 2025 12:42:36 +0000 (0:00:11.655) 0:00:54.366 ******** 2025-04-05 12:49:02.679471 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.679484 | orchestrator | 2025-04-05 12:49:02.679497 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-05 12:49:02.679510 | orchestrator | Saturday 05 April 2025 12:42:46 +0000 (0:00:09.631) 0:01:03.997 ******** 2025-04-05 12:49:02.679605 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.679624 | orchestrator | 2025-04-05 12:49:02.679638 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-04-05 12:49:02.679652 | orchestrator | Saturday 05 April 2025 12:42:48 +0000 (0:00:01.900) 0:01:05.897 ******** 2025-04-05 12:49:02.679665 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.679679 | orchestrator | 2025-04-05 12:49:02.679692 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-05 12:49:02.679714 | orchestrator | Saturday 05 April 2025 12:42:49 +0000 (0:00:01.115) 0:01:07.013 ******** 2025-04-05 12:49:02.679728 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.679742 | orchestrator | 2025-04-05 12:49:02.679755 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-05 12:49:02.679769 | orchestrator | Saturday 05 April 2025 12:42:50 +0000 (0:00:01.337) 0:01:08.350 ******** 2025-04-05 12:49:02.679782 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.679796 | orchestrator | 2025-04-05 12:49:02.679809 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-05 12:49:02.679823 | orchestrator | Saturday 05 April 2025 12:43:04 +0000 (0:00:13.932) 0:01:22.283 ******** 2025-04-05 12:49:02.679836 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.679870 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.679883 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.679896 | orchestrator | 2025-04-05 12:49:02.679908 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-04-05 12:49:02.679921 | orchestrator | 2025-04-05 12:49:02.679933 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-05 12:49:02.679946 | orchestrator | Saturday 05 April 2025 12:43:05 +0000 (0:00:00.271) 0:01:22.554 ******** 2025-04-05 12:49:02.679958 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.679970 | orchestrator | 2025-04-05 12:49:02.679983 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-04-05 12:49:02.679995 | orchestrator | Saturday 05 April 2025 12:43:05 +0000 (0:00:00.643) 0:01:23.198 ******** 2025-04-05 12:49:02.680008 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680020 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680033 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.680045 | orchestrator | 2025-04-05 12:49:02.680058 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-04-05 12:49:02.680070 | orchestrator | Saturday 05 April 2025 12:43:07 +0000 (0:00:01.832) 0:01:25.031 ******** 2025-04-05 12:49:02.680082 | orchestrator | 
skipping: [testbed-node-1] 2025-04-05 12:49:02.680095 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680107 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.680120 | orchestrator | 2025-04-05 12:49:02.680132 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-05 12:49:02.680145 | orchestrator | Saturday 05 April 2025 12:43:09 +0000 (0:00:02.104) 0:01:27.135 ******** 2025-04-05 12:49:02.680157 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.680170 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680182 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680195 | orchestrator | 2025-04-05 12:49:02.680208 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-05 12:49:02.680221 | orchestrator | Saturday 05 April 2025 12:43:10 +0000 (0:00:00.326) 0:01:27.462 ******** 2025-04-05 12:49:02.680235 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-05 12:49:02.680249 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680272 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-05 12:49:02.680287 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680301 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-05 12:49:02.680315 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-04-05 12:49:02.680329 | orchestrator | 2025-04-05 12:49:02.680343 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-05 12:49:02.680357 | orchestrator | Saturday 05 April 2025 12:43:17 +0000 (0:00:07.619) 0:01:35.082 ******** 2025-04-05 12:49:02.680371 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.680385 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680400 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680413 | orchestrator | 2025-04-05 12:49:02.680428 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-05 12:49:02.680442 | orchestrator | Saturday 05 April 2025 12:43:18 +0000 (0:00:00.463) 0:01:35.545 ******** 2025-04-05 12:49:02.680456 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-05 12:49:02.680470 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.680484 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-05 12:49:02.680498 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680512 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-05 12:49:02.680526 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680540 | orchestrator | 2025-04-05 12:49:02.680554 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-05 12:49:02.680568 | orchestrator | Saturday 05 April 2025 12:43:18 +0000 (0:00:00.722) 0:01:36.268 ******** 2025-04-05 12:49:02.680582 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680597 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.680609 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680621 | orchestrator | 2025-04-05 12:49:02.680634 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-04-05 12:49:02.680646 | orchestrator | Saturday 05 April 2025 12:43:19 +0000 (0:00:00.432) 0:01:36.701 ******** 2025-04-05 12:49:02.680658 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:49:02.680671 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680683 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.680695 | orchestrator | 2025-04-05 12:49:02.680707 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-04-05 12:49:02.680720 | orchestrator | Saturday 05 April 2025 12:43:20 +0000 (0:00:00.774) 0:01:37.476 ******** 2025-04-05 12:49:02.680732 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680745 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680827 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.680845 | orchestrator | 2025-04-05 12:49:02.680911 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-04-05 12:49:02.680924 | orchestrator | Saturday 05 April 2025 12:43:21 +0000 (0:00:01.962) 0:01:39.438 ******** 2025-04-05 12:49:02.680937 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.680950 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.680962 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.680975 | orchestrator | 2025-04-05 12:49:02.680988 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-05 12:49:02.681000 | orchestrator | Saturday 05 April 2025 12:43:38 +0000 (0:00:16.184) 0:01:55.623 ******** 2025-04-05 12:49:02.681013 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.681025 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.681038 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.681056 | orchestrator | 2025-04-05 12:49:02.681067 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-05 12:49:02.681088 | orchestrator | Saturday 05 April 2025 12:43:48 +0000 (0:00:09.855) 0:02:05.479 ******** 2025-04-05 12:49:02.681099 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.681109 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.681126 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.681136 | orchestrator | 2025-04-05 12:49:02.681147 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-04-05 12:49:02.681157 | orchestrator | Saturday 05 April 2025 12:43:49 +0000 (0:00:01.034) 0:02:06.514 ******** 2025-04-05 12:49:02.681167 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.681177 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.681188 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.681198 | orchestrator | 2025-04-05 12:49:02.681208 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-04-05 12:49:02.681218 | orchestrator | Saturday 05 April 2025 12:43:59 +0000 (0:00:09.948) 0:02:16.462 ******** 2025-04-05 12:49:02.681228 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.681255 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.681266 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.681276 | orchestrator | 2025-04-05 12:49:02.681286 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-05 12:49:02.681296 | orchestrator | Saturday 05 April 2025 12:44:00 +0000 (0:00:01.473) 0:02:17.936 ******** 2025-04-05 12:49:02.681307 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.681317 | orchestrator | skipping: 
[testbed-node-1] 2025-04-05 12:49:02.681327 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.681337 | orchestrator | 2025-04-05 12:49:02.681347 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-04-05 12:49:02.681357 | orchestrator | 2025-04-05 12:49:02.681368 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-05 12:49:02.681378 | orchestrator | Saturday 05 April 2025 12:44:00 +0000 (0:00:00.399) 0:02:18.336 ******** 2025-04-05 12:49:02.681388 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.681399 | orchestrator | 2025-04-05 12:49:02.681409 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-04-05 12:49:02.681420 | orchestrator | Saturday 05 April 2025 12:44:01 +0000 (0:00:00.568) 0:02:18.904 ******** 2025-04-05 12:49:02.681430 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-04-05 12:49:02.681441 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-04-05 12:49:02.681451 | orchestrator | 2025-04-05 12:49:02.681463 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-04-05 12:49:02.681474 | orchestrator | Saturday 05 April 2025 12:44:04 +0000 (0:00:02.693) 0:02:21.598 ******** 2025-04-05 12:49:02.681486 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-04-05 12:49:02.681498 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-04-05 12:49:02.681510 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-04-05 12:49:02.681527 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-04-05 12:49:02.681539 | orchestrator | 2025-04-05 12:49:02.681550 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-04-05 12:49:02.681562 | orchestrator | Saturday 05 April 2025 12:44:09 +0000 (0:00:05.603) 0:02:27.201 ******** 2025-04-05 12:49:02.681574 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-05 12:49:02.681585 | orchestrator | 2025-04-05 12:49:02.681597 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-04-05 12:49:02.681608 | orchestrator | Saturday 05 April 2025 12:44:12 +0000 (0:00:02.745) 0:02:29.947 ******** 2025-04-05 12:49:02.681620 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-05 12:49:02.681632 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-04-05 12:49:02.681643 | orchestrator | 2025-04-05 12:49:02.681655 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-04-05 12:49:02.681674 | orchestrator | Saturday 05 April 2025 12:44:15 +0000 (0:00:03.317) 0:02:33.265 ******** 2025-04-05 12:49:02.681686 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-05 12:49:02.681697 | orchestrator | 2025-04-05 12:49:02.681709 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-04-05 12:49:02.681721 | orchestrator | Saturday 05 April 2025 12:44:18 +0000 (0:00:02.863) 
0:02:36.128 ******** 2025-04-05 12:49:02.681732 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-04-05 12:49:02.681744 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-04-05 12:49:02.681755 | orchestrator | 2025-04-05 12:49:02.681767 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-05 12:49:02.681843 | orchestrator | Saturday 05 April 2025 12:44:25 +0000 (0:00:06.738) 0:02:42.867 ******** 2025-04-05 12:49:02.681876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.681892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.681903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.681976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.681993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.682037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.682061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682082 | orchestrator | 2025-04-05 12:49:02.682093 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-04-05 12:49:02.682103 | orchestrator | Saturday 05 April 2025 12:44:26 +0000 (0:00:01.401) 0:02:44.268 ******** 2025-04-05 12:49:02.682113 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.682124 | orchestrator | 2025-04-05 12:49:02.682134 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-04-05 12:49:02.682145 | orchestrator | Saturday 05 April 2025 12:44:26 +0000 (0:00:00.115) 0:02:44.383 ******** 2025-04-05 12:49:02.682155 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.682165 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.682175 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.682186 | orchestrator | 2025-04-05 12:49:02.682196 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-04-05 12:49:02.682206 | orchestrator | Saturday 05 April 2025 12:44:27 +0000 (0:00:00.393) 0:02:44.777 ******** 2025-04-05 12:49:02.682216 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-05 12:49:02.682226 | orchestrator | 2025-04-05 12:49:02.682236 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-04-05 12:49:02.682251 | orchestrator | Saturday 05 April 2025 12:44:27 +0000 (0:00:00.363) 0:02:45.140 ******** 2025-04-05 12:49:02.682262 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.682340 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.682358 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.682369 | orchestrator | 2025-04-05 12:49:02.682380 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-05 12:49:02.682391 | orchestrator | Saturday 05 April 2025 12:44:28 +0000 (0:00:00.363) 0:02:45.504 ******** 2025-04-05 12:49:02.682402 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.682413 | orchestrator | 2025-04-05 12:49:02.682423 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-05 12:49:02.682434 | orchestrator | Saturday 05 April 2025 12:44:28 +0000 (0:00:00.539) 0:02:46.043 ******** 2025-04-05 12:49:02.682445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.682458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.682531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.682550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.682562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.682574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.682585 | orchestrator | 2025-04-05 12:49:02.682596 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-05 12:49:02.682613 | orchestrator | Saturday 05 April 2025 12:44:30 +0000 (0:00:02.116) 0:02:48.160 ******** 2025-04-05 12:49:02.682631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.682644 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682656 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.682718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.682735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.682766 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.682778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682789 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.682800 | orchestrator | 2025-04-05 12:49:02.682811 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-05 12:49:02.682835 | orchestrator | Saturday 05 April 2025 12:44:31 +0000 (0:00:00.684) 0:02:48.844 ******** 2025-04-05 12:49:02.682915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.682932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682943 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.682954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.682972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.682982 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.683044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.683060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683071 | orchestrator | skipping: [testbed-node-2] 2025-04-05 
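Each container definition in the loop items above carries a healthcheck block (interval 30, retries 3, start_period 5, timeout 30, test healthcheck_curl http://<api_interface_ip>:8774). Roughly, that amounts to an HTTP liveness probe like the following standard-library sketch; the real healthcheck_curl script inside the Kolla images may differ in details:

import time
import urllib.error
import urllib.request


def http_healthcheck(url, retries=3, timeout=30.0, interval=30.0):
    """Return True if the endpoint answers an HTTP request within the allowed retries."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True          # got a 2xx/3xx response
        except urllib.error.HTTPError as exc:
            if exc.code < 500:       # a 4xx still proves the API is answering
                return True
        except (urllib.error.URLError, OSError):
            pass                     # connection refused, timeout, DNS error, ...
        if attempt < retries:
            time.sleep(interval)
    return False


# e.g. the nova-api check on the first controller (address taken from the log above)
print(http_healthcheck("http://192.168.16.10:8774"))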
12:49:02.683082 | orchestrator | 2025-04-05 12:49:02.683092 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-04-05 12:49:02.683102 | orchestrator | Saturday 05 April 2025 12:44:32 +0000 (0:00:01.020) 0:02:49.865 ******** 2025-04-05 12:49:02.683113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683356 | orchestrator | 2025-04-05 12:49:02.683367 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-04-05 12:49:02.683378 | orchestrator | Saturday 05 April 2025 12:44:34 +0000 (0:00:02.221) 0:02:52.086 ******** 2025-04-05 12:49:02.683389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683420 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.683494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.683568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683580 | orchestrator | 2025-04-05 12:49:02.683591 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-04-05 12:49:02.683602 | orchestrator | Saturday 05 April 2025 12:44:40 +0000 (0:00:05.439) 0:02:57.526 ******** 2025-04-05 12:49:02.683664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.683685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-04-05 12:49:02.683696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683707 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.683718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.683729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683823 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.683835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-05 12:49:02.683899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.683923 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.683934 | orchestrator | 2025-04-05 12:49:02.683944 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-04-05 12:49:02.683954 | orchestrator | Saturday 05 April 2025 12:44:40 +0000 (0:00:00.790) 0:02:58.316 ******** 2025-04-05 12:49:02.683964 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.683974 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.683985 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.683995 | orchestrator | 2025-04-05 12:49:02.684005 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-04-05 12:49:02.684015 | orchestrator | Saturday 05 April 2025 12:44:42 +0000 (0:00:01.773) 0:03:00.090 ******** 2025-04-05 12:49:02.684025 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.684035 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.684045 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.684055 | orchestrator | 2025-04-05 12:49:02.684066 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-04-05 12:49:02.684076 | orchestrator | Saturday 05 April 
2025 12:44:43 +0000 (0:00:00.476) 0:03:00.567 ******** 2025-04-05 12:49:02.684158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.684205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.684218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-05 12:49:02.684230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.684287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.684306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.684317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.684337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.684348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.684357 | orchestrator | 2025-04-05 12:49:02.684366 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-05 12:49:02.684376 | orchestrator | Saturday 05 April 2025 12:44:45 +0000 (0:00:02.139) 0:03:02.707 ******** 2025-04-05 12:49:02.684385 | orchestrator | 2025-04-05 12:49:02.684395 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-05 12:49:02.684404 | orchestrator | Saturday 05 April 2025 12:44:45 +0000 (0:00:00.103) 0:03:02.811 ******** 2025-04-05 12:49:02.684413 | orchestrator | 2025-04-05 12:49:02.684422 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-05 12:49:02.684431 | orchestrator | Saturday 05 April 2025 12:44:45 +0000 (0:00:00.218) 0:03:03.029 ******** 2025-04-05 12:49:02.684440 | orchestrator | 2025-04-05 12:49:02.684450 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-04-05 12:49:02.684459 | orchestrator | Saturday 05 April 2025 12:44:45 +0000 (0:00:00.102) 0:03:03.132 ******** 2025-04-05 12:49:02.684468 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.684477 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.684491 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.684500 | orchestrator | 2025-04-05 12:49:02.684509 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-04-05 12:49:02.684518 | orchestrator | Saturday 05 April 2025 12:45:04 +0000 (0:00:18.381) 0:03:21.514 ******** 2025-04-05 12:49:02.684527 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.684536 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.684545 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.684564 | orchestrator | 2025-04-05 12:49:02.684573 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-04-05 12:49:02.684582 | orchestrator | 2025-04-05 12:49:02.684594 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-05 12:49:02.684603 | orchestrator | Saturday 05 April 2025 12:45:13 +0000 (0:00:09.617) 0:03:31.131 ******** 2025-04-05 12:49:02.684612 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.684622 | orchestrator | 2025-04-05 12:49:02.684631 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-05 12:49:02.684639 | orchestrator | Saturday 05 April 2025 12:45:15 +0000 (0:00:01.690) 0:03:32.822 ******** 2025-04-05 12:49:02.684693 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.684705 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.684714 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.684722 | orchestrator | skipping: [testbed-node-0] 2025-04-05 
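Each service definition above carries a healthcheck block whose test is the command Docker runs inside the container, for example healthcheck_curl http://192.168.16.10:8774 for nova-api and healthcheck_port nova-scheduler 5672 for the scheduler (5672 being the RabbitMQ port). The kolla images ship shell helpers for these probes; as a rough, non-authoritative stand-in, the two checks can be approximated in Python like this:

    # Simplified approximations of the probes named in the healthcheck entries above.
    # The containers use shell helper scripts; these functions only illustrate the idea.
    import socket
    import urllib.error
    import urllib.request

    def healthcheck_curl(url, timeout=30):
        # Healthy if the endpoint answers HTTP at all (any status code counts).
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            return True
        except OSError:
            return False

    def healthcheck_port(host, port, timeout=30):
        # Simplification: the real helper checks that the named process holds a
        # connection to the port; here we only test TCP reachability of host:port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

The interval, retries, start_period and timeout values in each block ("30", "3", "5", "30") map onto Docker's healthcheck settings for the created container.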
12:49:02.684731 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.684739 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.684748 | orchestrator | 2025-04-05 12:49:02.684757 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-04-05 12:49:02.684765 | orchestrator | Saturday 05 April 2025 12:45:16 +0000 (0:00:00.944) 0:03:33.766 ******** 2025-04-05 12:49:02.684774 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.684782 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.684791 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.684799 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:49:02.684808 | orchestrator | 2025-04-05 12:49:02.684817 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-05 12:49:02.684825 | orchestrator | Saturday 05 April 2025 12:45:17 +0000 (0:00:00.984) 0:03:34.751 ******** 2025-04-05 12:49:02.684834 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-04-05 12:49:02.684843 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-04-05 12:49:02.684867 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-04-05 12:49:02.684876 | orchestrator | 2025-04-05 12:49:02.684885 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-05 12:49:02.684893 | orchestrator | Saturday 05 April 2025 12:45:17 +0000 (0:00:00.596) 0:03:35.347 ******** 2025-04-05 12:49:02.684906 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-04-05 12:49:02.684915 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-04-05 12:49:02.684924 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-04-05 12:49:02.684933 | orchestrator | 2025-04-05 12:49:02.684941 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-05 12:49:02.684950 | orchestrator | Saturday 05 April 2025 12:45:19 +0000 (0:00:01.198) 0:03:36.545 ******** 2025-04-05 12:49:02.684958 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-04-05 12:49:02.684967 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.684979 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-04-05 12:49:02.684988 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.684997 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-04-05 12:49:02.685005 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.685014 | orchestrator | 2025-04-05 12:49:02.685028 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-04-05 12:49:02.685037 | orchestrator | Saturday 05 April 2025 12:45:19 +0000 (0:00:00.742) 0:03:37.288 ******** 2025-04-05 12:49:02.685046 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:49:02.685055 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:49:02.685063 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-05 12:49:02.685072 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-05 12:49:02.685081 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-05 12:49:02.685089 | orchestrator 
| skipping: [testbed-node-0] 2025-04-05 12:49:02.685098 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:49:02.685107 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:49:02.685115 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.685124 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-05 12:49:02.685133 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-05 12:49:02.685142 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.685150 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-05 12:49:02.685159 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-05 12:49:02.685168 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-05 12:49:02.685176 | orchestrator | 2025-04-05 12:49:02.685185 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-04-05 12:49:02.685194 | orchestrator | Saturday 05 April 2025 12:45:20 +0000 (0:00:00.887) 0:03:38.176 ******** 2025-04-05 12:49:02.685202 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.685211 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.685220 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.685228 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.685237 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.685246 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.685254 | orchestrator | 2025-04-05 12:49:02.685263 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-04-05 12:49:02.685272 | orchestrator | Saturday 05 April 2025 12:45:21 +0000 (0:00:00.945) 0:03:39.122 ******** 2025-04-05 12:49:02.685280 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.685289 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.685298 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.685306 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.685315 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.685324 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.685334 | orchestrator | 2025-04-05 12:49:02.685343 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-05 12:49:02.685357 | orchestrator | Saturday 05 April 2025 12:45:23 +0000 (0:00:01.662) 0:03:40.784 ******** 2025-04-05 12:49:02.685415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.685448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.685460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.685592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.685603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.685724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.685733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.685742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.685796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.685823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.685833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.685842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.685945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.685964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.685974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.685994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686156 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.686166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.686188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.686279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.686292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.686455 | orchestrator | 2025-04-05 12:49:02.686465 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-05 12:49:02.686474 | orchestrator | Saturday 05 April 2025 12:45:25 +0000 (0:00:02.309) 0:03:43.093 ******** 2025-04-05 12:49:02.686484 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:49:02.686494 | orchestrator | 2025-04-05 12:49:02.686503 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-05 12:49:02.686512 | orchestrator | Saturday 05 April 2025 12:45:26 
+0000 (0:00:01.297) 0:03:44.391 ******** 2025-04-05 12:49:02.686522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.686942 | orchestrator | 2025-04-05 12:49:02.686951 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-05 12:49:02.686960 | orchestrator | Saturday 05 April 2025 12:45:30 +0000 (0:00:03.340) 0:03:47.732 ******** 2025-04-05 12:49:02.686992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687021 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.687036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687131 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.687140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687169 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.687178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687239 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.687248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687281 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.687289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687303 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.687311 | orchestrator | 2025-04-05 12:49:02.687319 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-05 12:49:02.687327 | orchestrator | Saturday 05 April 2025 12:45:31 +0000 (0:00:01.662) 0:03:49.394 ******** 2025-04-05 12:49:02.687336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687398 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.687407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687440 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.687455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.687484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.687495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687504 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.687513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687536 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.687545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687570 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.687597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.687608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.687617 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.687626 | orchestrator | 2025-04-05 12:49:02.687635 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-05 12:49:02.687644 | orchestrator | Saturday 05 April 2025 12:45:34 +0000 (0:00:02.314) 0:03:51.708 ******** 2025-04-05 12:49:02.687653 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.687661 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.687670 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.687685 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-05 12:49:02.687695 | orchestrator | 2025-04-05 12:49:02.687703 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-04-05 12:49:02.687712 | orchestrator | Saturday 05 April 2025 12:45:35 +0000 (0:00:01.024) 0:03:52.733 ******** 2025-04-05 12:49:02.687720 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:49:02.687729 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-05 12:49:02.687737 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-05 12:49:02.687746 | orchestrator | 2025-04-05 12:49:02.687755 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-04-05 12:49:02.687763 | orchestrator | Saturday 05 April 2025 12:45:36 +0000 (0:00:00.791) 0:03:53.525 ******** 2025-04-05 12:49:02.687772 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:49:02.687780 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-05 12:49:02.687789 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-05 12:49:02.687797 | orchestrator | 2025-04-05 
12:49:02.687806 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-04-05 12:49:02.687815 | orchestrator | Saturday 05 April 2025 12:45:36 +0000 (0:00:00.781) 0:03:54.306 ******** 2025-04-05 12:49:02.687823 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:49:02.687832 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:49:02.687841 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:49:02.687864 | orchestrator | 2025-04-05 12:49:02.687874 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-04-05 12:49:02.687883 | orchestrator | Saturday 05 April 2025 12:45:37 +0000 (0:00:00.634) 0:03:54.940 ******** 2025-04-05 12:49:02.687892 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:49:02.687902 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:49:02.687910 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:49:02.687919 | orchestrator | 2025-04-05 12:49:02.687929 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-04-05 12:49:02.687937 | orchestrator | Saturday 05 April 2025 12:45:37 +0000 (0:00:00.457) 0:03:55.398 ******** 2025-04-05 12:49:02.687947 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-05 12:49:02.687955 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-05 12:49:02.687964 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-05 12:49:02.687973 | orchestrator | 2025-04-05 12:49:02.687982 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-04-05 12:49:02.687991 | orchestrator | Saturday 05 April 2025 12:45:39 +0000 (0:00:01.243) 0:03:56.641 ******** 2025-04-05 12:49:02.688000 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-05 12:49:02.688009 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-05 12:49:02.688018 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-05 12:49:02.688027 | orchestrator | 2025-04-05 12:49:02.688036 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-04-05 12:49:02.688044 | orchestrator | Saturday 05 April 2025 12:45:40 +0000 (0:00:01.175) 0:03:57.817 ******** 2025-04-05 12:49:02.688054 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-05 12:49:02.688062 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-05 12:49:02.688071 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-05 12:49:02.688080 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-04-05 12:49:02.688089 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-04-05 12:49:02.688099 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-04-05 12:49:02.688108 | orchestrator | 2025-04-05 12:49:02.688118 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-04-05 12:49:02.688127 | orchestrator | Saturday 05 April 2025 12:45:45 +0000 (0:00:05.169) 0:04:02.986 ******** 2025-04-05 12:49:02.688136 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.688149 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.688158 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.688167 | orchestrator | 2025-04-05 12:49:02.688176 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 
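The nova-cell tasks in this stretch of the run wire the compute nodes up to the external Ceph cluster: the nova and cinder keyrings are checked, their keys extracted, the keyrings and ceph.conf copied into the nova-compute and nova-libvirt config directories, and libvirt secrets defined for the client.nova and client.cinder keys (their UUIDs appear in the "Pushing nova secret xml for libvirt" items below). As a rough illustration of the general shape of such a secret definition, the sketch below renders the standard libvirt "ceph" usage secret XML for the two UUIDs reported in the log; it is not the kolla-ansible template itself, only an approximation of the document that ultimately gets defined on the hypervisors.

    # Illustrative sketch only (not kolla-ansible code): generic libvirt secret
    # XML for a Ceph client key, using the UUIDs reported by the
    # "Pushing nova secret xml for libvirt" task further down.
    from xml.sax.saxutils import escape

    def ceph_libvirt_secret_xml(uuid: str, name: str) -> str:
        """Return a libvirt <secret> definition for a Ceph client key."""
        return (
            "<secret ephemeral='no' private='no'>\n"
            f"  <uuid>{escape(uuid)}</uuid>\n"
            "  <usage type='ceph'>\n"
            f"    <name>{escape(name)}</name>\n"
            "  </usage>\n"
            "</secret>\n"
        )

    for uuid, name in [
        ("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "client.nova secret"),
        ("63dd366f-e403-41f2-beff-dad9980a1637", "client.cinder secret"),
    ]:
        print(ceph_libvirt_secret_xml(uuid, name))

The key material itself is handled separately by the subsequent "Pushing secrets key for libvirt" task; its loop items are shown as (item=None), presumably because they contain the actual secret values.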
2025-04-05 12:49:02.688185 | orchestrator | Saturday 05 April 2025 12:45:45 +0000 (0:00:00.399) 0:04:03.385 ******** 2025-04-05 12:49:02.688195 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.688223 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.688233 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.688241 | orchestrator | 2025-04-05 12:49:02.688249 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-04-05 12:49:02.688257 | orchestrator | Saturday 05 April 2025 12:45:46 +0000 (0:00:00.406) 0:04:03.792 ******** 2025-04-05 12:49:02.688265 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.688273 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.688282 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.688291 | orchestrator | 2025-04-05 12:49:02.688299 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-04-05 12:49:02.688311 | orchestrator | Saturday 05 April 2025 12:45:47 +0000 (0:00:01.443) 0:04:05.235 ******** 2025-04-05 12:49:02.688322 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-05 12:49:02.688331 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-05 12:49:02.688339 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-05 12:49:02.688347 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-05 12:49:02.688355 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-05 12:49:02.688363 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-05 12:49:02.688371 | orchestrator | 2025-04-05 12:49:02.688379 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-04-05 12:49:02.688387 | orchestrator | Saturday 05 April 2025 12:45:51 +0000 (0:00:03.244) 0:04:08.480 ******** 2025-04-05 12:49:02.688395 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:49:02.688404 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:49:02.688412 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:49:02.688420 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-05 12:49:02.688428 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.688436 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-05 12:49:02.688444 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.688453 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-05 12:49:02.688461 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.688468 | orchestrator | 2025-04-05 12:49:02.688477 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-04-05 12:49:02.688485 | orchestrator | Saturday 05 April 2025 12:45:54 +0000 (0:00:03.130) 0:04:11.610 ******** 2025-04-05 12:49:02.688493 | orchestrator | skipping: [testbed-node-3] 2025-04-05 
12:49:02.688501 | orchestrator | 2025-04-05 12:49:02.688509 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-04-05 12:49:02.688517 | orchestrator | Saturday 05 April 2025 12:45:54 +0000 (0:00:00.115) 0:04:11.726 ******** 2025-04-05 12:49:02.688524 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.688532 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.688540 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.688548 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.688561 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.688569 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.688577 | orchestrator | 2025-04-05 12:49:02.688585 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-04-05 12:49:02.688593 | orchestrator | Saturday 05 April 2025 12:45:55 +0000 (0:00:00.869) 0:04:12.596 ******** 2025-04-05 12:49:02.688601 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-05 12:49:02.688609 | orchestrator | 2025-04-05 12:49:02.688617 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-04-05 12:49:02.688625 | orchestrator | Saturday 05 April 2025 12:45:55 +0000 (0:00:00.391) 0:04:12.987 ******** 2025-04-05 12:49:02.688632 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.688640 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.688648 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.688656 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.688664 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.688672 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.688680 | orchestrator | 2025-04-05 12:49:02.688688 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-04-05 12:49:02.688696 | orchestrator | Saturday 05 April 2025 12:45:56 +0000 (0:00:00.642) 0:04:13.630 ******** 2025-04-05 12:49:02.688723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.688741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2025-04-05 12:49:02.688751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.688759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.688773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.688781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.688816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.688884 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.688916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.688935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.688943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.688966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.688975 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.688983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689393 | orchestrator | 2025-04-05 12:49:02.689401 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-04-05 12:49:02.689409 | orchestrator | Saturday 05 April 2025 12:45:59 +0000 (0:00:03.579) 0:04:17.210 ******** 2025-04-05 12:49:02.689417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.689732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.689750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.689950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.689959 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.689967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.689984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.690052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.690066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.690074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.690083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.690091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.690099 | orchestrator | 2025-04-05 12:49:02.690107 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-04-05 12:49:02.690115 | orchestrator | Saturday 05 April 2025 12:46:06 +0000 (0:00:06.310) 0:04:23.521 ******** 2025-04-05 12:49:02.690129 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.690137 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.690145 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.690153 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690161 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690169 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690176 | orchestrator | 2025-04-05 12:49:02.690184 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-04-05 12:49:02.690192 | orchestrator | Saturday 05 
April 2025 12:46:07 +0000 (0:00:01.479) 0:04:25.000 ******** 2025-04-05 12:49:02.690200 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-05 12:49:02.690208 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-05 12:49:02.690216 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-05 12:49:02.690224 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-05 12:49:02.690232 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690240 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-05 12:49:02.690268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-05 12:49:02.690278 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-05 12:49:02.690286 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690294 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-05 12:49:02.690305 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-05 12:49:02.690313 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690322 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-05 12:49:02.690329 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-05 12:49:02.690337 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-05 12:49:02.690345 | orchestrator | 2025-04-05 12:49:02.690353 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-04-05 12:49:02.690361 | orchestrator | Saturday 05 April 2025 12:46:11 +0000 (0:00:04.035) 0:04:29.036 ******** 2025-04-05 12:49:02.690369 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.690377 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.690385 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.690393 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690401 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690409 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690417 | orchestrator | 2025-04-05 12:49:02.690425 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-04-05 12:49:02.690433 | orchestrator | Saturday 05 April 2025 12:46:12 +0000 (0:00:00.875) 0:04:29.911 ******** 2025-04-05 12:49:02.690441 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-05 12:49:02.690449 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-05 12:49:02.690457 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-05 12:49:02.690473 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690486 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690494 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690502 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690510 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-05 12:49:02.690518 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-05 12:49:02.690526 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-05 12:49:02.690534 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690542 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690550 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-05 12:49:02.690558 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690566 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690574 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690582 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690590 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690598 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690606 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-05 12:49:02.690614 | orchestrator | 2025-04-05 12:49:02.690622 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-04-05 12:49:02.690630 | orchestrator | Saturday 05 April 2025 12:46:19 +0000 (0:00:07.394) 0:04:37.306 ******** 2025-04-05 12:49:02.690638 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:49:02.690646 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:49:02.690654 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-05 12:49:02.690665 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-05 12:49:02.690691 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:49:02.690701 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-05 12:49:02.690709 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-05 12:49:02.690717 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 
12:49:02.690725 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 12:49:02.690733 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-05 12:49:02.690741 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:49:02.690749 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-05 12:49:02.690757 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-05 12:49:02.690764 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690772 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-05 12:49:02.690786 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690794 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-05 12:49:02.690802 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690810 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:49:02.690818 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:49:02.690826 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-05 12:49:02.690834 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:49:02.690842 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:49:02.690888 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-05 12:49:02.690898 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:49:02.690906 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:49:02.690914 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-05 12:49:02.690922 | orchestrator | 2025-04-05 12:49:02.690930 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-04-05 12:49:02.690938 | orchestrator | Saturday 05 April 2025 12:46:28 +0000 (0:00:08.653) 0:04:45.959 ******** 2025-04-05 12:49:02.690946 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.690954 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.690962 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.690970 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.690978 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.690986 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.690993 | orchestrator | 2025-04-05 12:49:02.691001 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-04-05 12:49:02.691009 | orchestrator | Saturday 05 April 2025 12:46:29 +0000 (0:00:00.631) 0:04:46.591 ******** 2025-04-05 12:49:02.691017 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.691025 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.691033 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.691041 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.691048 | orchestrator | skipping: [testbed-node-1] 
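
The alternating "skipping"/"changed" results in the nova-cell tasks above and below follow one pattern: testbed-node-0/1/2 act as control nodes and testbed-node-3/4/5 as compute nodes, and each service item is only acted on when the host belongs to the item's 'group' and the service is 'enabled'. The following is a minimal Python sketch of that selection logic; the host-to-group mapping is an assumption based on the testbed layout shown in this log, and it is illustrative only, not the kolla-ansible implementation.

# Illustrative sketch of how a per-host service filter produces the
# "skipping" vs. "changed" pattern seen in this log.
# The host_groups mapping below is assumed, not read from the real inventory.
services = {
    "nova-libvirt":         {"group": "compute",              "enabled": True},
    "nova-ssh":             {"group": "compute",              "enabled": True},
    "nova-compute":         {"group": "compute",              "enabled": True},
    "nova-novncproxy":      {"group": "nova-novncproxy",      "enabled": True},
    "nova-conductor":       {"group": "nova-conductor",       "enabled": True},
    "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
    "nova-compute-ironic":  {"group": "nova-compute-ironic",  "enabled": False},
}

host_groups = {
    "testbed-node-0": {"nova-novncproxy", "nova-conductor"},  # control node (assumed)
    "testbed-node-3": {"compute"},                            # compute node (assumed)
}

def applicable(host):
    """Services that would actually be acted on; everything else is skipped."""
    return sorted(name for name, svc in services.items()
                  if svc["enabled"] and svc["group"] in host_groups[host])

print(applicable("testbed-node-0"))  # ['nova-conductor', 'nova-novncproxy']
print(applicable("testbed-node-3"))  # ['nova-compute', 'nova-libvirt', 'nova-ssh']

Under that assumption, nova_libvirt, nova_ssh and nova_compute only come back "changed" on testbed-node-3/4/5, while nova_novncproxy and nova_conductor only come back "changed" on testbed-node-0/1/2, and services with 'enabled': False (spicehtml5proxy, serialproxy, compute-ironic) are skipped everywhere.
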
2025-04-05 12:49:02.691061 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.691069 | orchestrator | 2025-04-05 12:49:02.691077 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-04-05 12:49:02.691085 | orchestrator | Saturday 05 April 2025 12:46:29 +0000 (0:00:00.662) 0:04:47.254 ******** 2025-04-05 12:49:02.691092 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.691099 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.691106 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.691113 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.691119 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.691126 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.691133 | orchestrator | 2025-04-05 12:49:02.691140 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-04-05 12:49:02.691147 | orchestrator | Saturday 05 April 2025 12:46:32 +0000 (0:00:02.714) 0:04:49.969 ******** 2025-04-05 12:49:02.691173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691288 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.691295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 
'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691409 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.691419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691474 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.691485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691522 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691540 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.691551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691622 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.691635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.691672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.691679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.691711 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.691718 | orchestrator | 2025-04-05 12:49:02.691726 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-04-05 12:49:02.691733 | orchestrator | Saturday 05 April 2025 12:46:34 +0000 (0:00:01.902) 0:04:51.871 ******** 2025-04-05 12:49:02.691740 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-05 12:49:02.691747 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691753 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.691760 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-05 12:49:02.691767 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691774 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.691781 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-05 12:49:02.691788 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691795 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.691808 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-05 12:49:02.691814 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691821 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.691828 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-05 12:49:02.691835 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691842 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.691860 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-05 12:49:02.691868 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-05 12:49:02.691875 | orchestrator | skipping: 
[testbed-node-2] 2025-04-05 12:49:02.691882 | orchestrator | 2025-04-05 12:49:02.691888 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-04-05 12:49:02.691895 | orchestrator | Saturday 05 April 2025 12:46:35 +0000 (0:00:00.913) 0:04:52.785 ******** 2025-04-05 12:49:02.691903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.691936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.691948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-05 12:49:02.691973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-05 12:49:02.691986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-05 12:49:02.691998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 
12:49:02.692092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/nova-spicehtml5proxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-05 12:49:02.692196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/nova-serialproxy:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-05 12:49:02.692203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 
'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-05 12:49:02.692351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/nova-compute-ironic:2024.1', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-05 12:49:02.692364 | orchestrator | 2025-04-05 12:49:02.692371 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-05 12:49:02.692378 | orchestrator | Saturday 05 April 2025 12:46:38 +0000 (0:00:03.301) 0:04:56.087 ******** 2025-04-05 12:49:02.692385 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.692392 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.692399 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.692406 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.692412 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.692419 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.692426 | orchestrator | 2025-04-05 12:49:02.692433 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692440 | orchestrator | Saturday 05 April 2025 12:46:39 +0000 (0:00:01.000) 0:04:57.087 ******** 2025-04-05 12:49:02.692447 | orchestrator | 2025-04-05 12:49:02.692454 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692464 | orchestrator | Saturday 05 April 2025 12:46:39 +0000 (0:00:00.107) 0:04:57.195 ******** 2025-04-05 12:49:02.692471 | orchestrator | 2025-04-05 12:49:02.692478 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692485 | orchestrator | Saturday 05 April 2025 12:46:40 +0000 (0:00:00.246) 0:04:57.441 ******** 2025-04-05 12:49:02.692492 | orchestrator | 2025-04-05 12:49:02.692499 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692506 | orchestrator | Saturday 05 April 2025 12:46:40 +0000 (0:00:00.106) 0:04:57.547 ******** 2025-04-05 12:49:02.692512 | orchestrator | 2025-04-05 12:49:02.692519 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692526 | orchestrator | Saturday 05 April 2025 12:46:40 +0000 (0:00:00.262) 0:04:57.810 ******** 2025-04-05 12:49:02.692533 | orchestrator | 2025-04-05 12:49:02.692540 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-05 12:49:02.692546 | orchestrator | Saturday 05 April 2025 12:46:40 +0000 (0:00:00.103) 0:04:57.914 ******** 2025-04-05 12:49:02.692553 | orchestrator | 2025-04-05 12:49:02.692560 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-04-05 12:49:02.692567 | orchestrator | Saturday 05 April 2025 12:46:40 +0000 (0:00:00.259) 0:04:58.173 ******** 2025-04-05 12:49:02.692573 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.692580 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.692587 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.692594 | orchestrator | 2025-04-05 12:49:02.692601 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-04-05 12:49:02.692608 | 
orchestrator | Saturday 05 April 2025 12:46:52 +0000 (0:00:11.542) 0:05:09.716 ******** 2025-04-05 12:49:02.692615 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.692622 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.692629 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.692636 | orchestrator | 2025-04-05 12:49:02.692643 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-04-05 12:49:02.692649 | orchestrator | Saturday 05 April 2025 12:47:06 +0000 (0:00:14.706) 0:05:24.423 ******** 2025-04-05 12:49:02.692656 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.692663 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.692670 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.692677 | orchestrator | 2025-04-05 12:49:02.692684 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-04-05 12:49:02.692691 | orchestrator | Saturday 05 April 2025 12:47:25 +0000 (0:00:18.541) 0:05:42.964 ******** 2025-04-05 12:49:02.692698 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.692705 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.692711 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.692723 | orchestrator | 2025-04-05 12:49:02.692730 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-04-05 12:49:02.692737 | orchestrator | Saturday 05 April 2025 12:47:46 +0000 (0:00:21.125) 0:06:04.090 ******** 2025-04-05 12:49:02.692743 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-04-05 12:49:02.692750 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-04-05 12:49:02.692757 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
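After restarting the nova-libvirt container, the play waits for it to become ready; each compute node fails the first probe and is retried with a budget of 10 retries, as the FAILED - RETRYING entries above show. Below is a minimal sketch of that retry pattern in Python, assuming the probe is the same 'virsh version --daemon' command used as the container healthcheck in the service definition earlier in this log and that the kolla containers are managed by Docker; the actual kolla-ansible handler is implemented differently, in Ansible.

import subprocess
import time


def wait_for_libvirt(container="nova_libvirt", retries=10, delay=5):
    """Retry a readiness probe against the libvirt container.

    The probe mirrors the healthcheck command from the service definition
    above ('virsh version --daemon'); retries and delay are illustrative.
    """
    for attempt in range(retries):
        result = subprocess.run(
            ["docker", "exec", container, "virsh", "version", "--daemon"],
            capture_output=True,
        )
        if result.returncode == 0:
            return True
        print(f"FAILED - RETRYING: libvirt not ready "
              f"({retries - attempt - 1} retries left)")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    if not wait_for_libvirt():
        raise SystemExit("nova_libvirt did not become ready in time")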
2025-04-05 12:49:02.692764 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.692771 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.692778 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.692784 | orchestrator | 2025-04-05 12:49:02.692791 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-04-05 12:49:02.692798 | orchestrator | Saturday 05 April 2025 12:47:52 +0000 (0:00:06.150) 0:06:10.240 ******** 2025-04-05 12:49:02.692805 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.692812 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.692819 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.692826 | orchestrator | 2025-04-05 12:49:02.692833 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-04-05 12:49:02.692840 | orchestrator | Saturday 05 April 2025 12:47:53 +0000 (0:00:00.818) 0:06:11.058 ******** 2025-04-05 12:49:02.692846 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:49:02.692865 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:49:02.692872 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:49:02.692879 | orchestrator | 2025-04-05 12:49:02.692886 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-04-05 12:49:02.692893 | orchestrator | Saturday 05 April 2025 12:48:14 +0000 (0:00:21.013) 0:06:32.072 ******** 2025-04-05 12:49:02.692900 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.692907 | orchestrator | 2025-04-05 12:49:02.692916 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-04-05 12:49:02.692924 | orchestrator | Saturday 05 April 2025 12:48:14 +0000 (0:00:00.118) 0:06:32.191 ******** 2025-04-05 12:49:02.692931 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.692938 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.692945 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.692951 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.692962 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.692969 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:49:02.692976 | orchestrator | 2025-04-05 12:49:02.692983 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-04-05 12:49:02.692990 | orchestrator | Saturday 05 April 2025 12:48:22 +0000 (0:00:07.328) 0:06:39.519 ******** 2025-04-05 12:49:02.692997 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.693004 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.693010 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.693017 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693024 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693031 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693038 | orchestrator | 2025-04-05 12:49:02.693045 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-04-05 12:49:02.693052 | orchestrator | Saturday 05 April 2025 12:48:30 +0000 (0:00:08.615) 0:06:48.135 ******** 2025-04-05 12:49:02.693059 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.693066 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.693073 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693079 | 
orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693086 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693093 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-04-05 12:49:02.693104 | orchestrator | 2025-04-05 12:49:02.693113 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-05 12:49:02.693120 | orchestrator | Saturday 05 April 2025 12:48:33 +0000 (0:00:02.743) 0:06:50.878 ******** 2025-04-05 12:49:02.693127 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:49:02.693134 | orchestrator | 2025-04-05 12:49:02.693141 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-05 12:49:02.693148 | orchestrator | Saturday 05 April 2025 12:48:43 +0000 (0:00:09.567) 0:07:00.446 ******** 2025-04-05 12:49:02.693155 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:49:02.693162 | orchestrator | 2025-04-05 12:49:02.693169 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-04-05 12:49:02.693176 | orchestrator | Saturday 05 April 2025 12:48:44 +0000 (0:00:01.059) 0:07:01.505 ******** 2025-04-05 12:49:02.693183 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.693190 | orchestrator | 2025-04-05 12:49:02.693196 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-04-05 12:49:02.693203 | orchestrator | Saturday 05 April 2025 12:48:45 +0000 (0:00:01.035) 0:07:02.540 ******** 2025-04-05 12:49:02.693210 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:49:02.693217 | orchestrator | 2025-04-05 12:49:02.693224 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-04-05 12:49:02.693231 | orchestrator | Saturday 05 April 2025 12:48:53 +0000 (0:00:08.068) 0:07:10.609 ******** 2025-04-05 12:49:02.693238 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:49:02.693245 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:49:02.693252 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:49:02.693259 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:49:02.693266 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:49:02.693273 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:49:02.693280 | orchestrator | 2025-04-05 12:49:02.693286 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-04-05 12:49:02.693293 | orchestrator | 2025-04-05 12:49:02.693300 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-04-05 12:49:02.693307 | orchestrator | Saturday 05 April 2025 12:48:55 +0000 (0:00:01.957) 0:07:12.567 ******** 2025-04-05 12:49:02.693314 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:49:02.693322 | orchestrator | changed: [testbed-node-1] 2025-04-05 12:49:02.693328 | orchestrator | changed: [testbed-node-2] 2025-04-05 12:49:02.693335 | orchestrator | 2025-04-05 12:49:02.693342 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-04-05 12:49:02.693349 | orchestrator | 2025-04-05 12:49:02.693356 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-04-05 12:49:02.693363 | orchestrator | Saturday 05 April 2025 12:48:56 +0000 (0:00:01.163) 0:07:13.730 ******** 2025-04-05 
12:49:02.693370 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693377 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693384 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693391 | orchestrator | 2025-04-05 12:49:02.693398 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-04-05 12:49:02.693404 | orchestrator | 2025-04-05 12:49:02.693411 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-04-05 12:49:02.693418 | orchestrator | Saturday 05 April 2025 12:48:56 +0000 (0:00:00.582) 0:07:14.312 ******** 2025-04-05 12:49:02.693425 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-04-05 12:49:02.693432 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-05 12:49:02.693439 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693446 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-04-05 12:49:02.693453 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-04-05 12:49:02.693459 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693470 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:49:02.693478 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-04-05 12:49:02.693484 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-05 12:49:02.693491 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693498 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-04-05 12:49:02.693508 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-04-05 12:49:02.693515 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693522 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:49:02.693529 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-04-05 12:49:02.693536 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-05 12:49:02.693543 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693553 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-04-05 12:49:02.693560 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-04-05 12:49:02.693567 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693573 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:49:02.693581 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-04-05 12:49:02.693587 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-05 12:49:02.693594 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693601 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-04-05 12:49:02.693608 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-04-05 12:49:02.693615 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693622 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693629 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-04-05 12:49:02.693636 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-05 12:49:02.693643 | orchestrator | 
skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693650 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-04-05 12:49:02.693656 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-04-05 12:49:02.693663 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693670 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693677 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-04-05 12:49:02.693684 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-05 12:49:02.693691 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-05 12:49:02.693698 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-04-05 12:49:02.693705 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-04-05 12:49:02.693711 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-04-05 12:49:02.693718 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693725 | orchestrator | 2025-04-05 12:49:02.693732 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-04-05 12:49:02.693739 | orchestrator | 2025-04-05 12:49:02.693746 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-04-05 12:49:02.693753 | orchestrator | Saturday 05 April 2025 12:48:58 +0000 (0:00:01.480) 0:07:15.793 ******** 2025-04-05 12:49:02.693759 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-04-05 12:49:02.693766 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-04-05 12:49:02.693773 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693780 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-04-05 12:49:02.693787 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-04-05 12:49:02.693794 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693804 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-04-05 12:49:02.693811 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-04-05 12:49:02.693818 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693825 | orchestrator | 2025-04-05 12:49:02.693832 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-04-05 12:49:02.693839 | orchestrator | 2025-04-05 12:49:02.693846 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-04-05 12:49:02.693865 | orchestrator | Saturday 05 April 2025 12:48:58 +0000 (0:00:00.589) 0:07:16.382 ******** 2025-04-05 12:49:02.693872 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693879 | orchestrator | 2025-04-05 12:49:02.693886 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-04-05 12:49:02.693893 | orchestrator | 2025-04-05 12:49:02.693900 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-04-05 12:49:02.693907 | orchestrator | Saturday 05 April 2025 12:48:59 +0000 (0:00:00.961) 0:07:17.344 ******** 2025-04-05 12:49:02.693913 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:49:02.693921 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:49:02.693928 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:49:02.693935 | 
orchestrator | 2025-04-05 12:49:02.693942 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:49:02.693949 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:49:02.693956 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-04-05 12:49:02.693963 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-05 12:49:02.693970 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-05 12:49:02.693980 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-04-05 12:49:05.721032 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-04-05 12:49:05.721144 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-04-05 12:49:05.721161 | orchestrator | 2025-04-05 12:49:05.721176 | orchestrator | 2025-04-05 12:49:05.721190 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:49:05.721206 | orchestrator | Saturday 05 April 2025 12:49:00 +0000 (0:00:00.661) 0:07:18.005 ******** 2025-04-05 12:49:05.721219 | orchestrator | =============================================================================== 2025-04-05 12:49:05.721233 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 24.80s 2025-04-05 12:49:05.721246 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 21.13s 2025-04-05 12:49:05.721259 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.01s 2025-04-05 12:49:05.721273 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.54s 2025-04-05 12:49:05.721286 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.38s 2025-04-05 12:49:05.721299 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 16.18s 2025-04-05 12:49:05.721313 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.71s 2025-04-05 12:49:05.721326 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 13.93s 2025-04-05 12:49:05.721339 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 11.66s 2025-04-05 12:49:05.721380 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.54s 2025-04-05 12:49:05.721394 | orchestrator | nova-cell : Create cell ------------------------------------------------- 9.95s 2025-04-05 12:49:05.721407 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.86s 2025-04-05 12:49:05.721421 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.63s 2025-04-05 12:49:05.721434 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.62s 2025-04-05 12:49:05.721447 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.57s 2025-04-05 12:49:05.721460 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.65s 2025-04-05 12:49:05.721474 | 
orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.62s 2025-04-05 12:49:05.721487 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 8.07s 2025-04-05 12:49:05.721500 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.62s 2025-04-05 12:49:05.721513 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 7.39s 2025-04-05 12:49:05.721527 | orchestrator | 2025-04-05 12:49:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:05.721558 | orchestrator | 2025-04-05 12:49:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:08.766923 | orchestrator | 2025-04-05 12:49:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:11.810814 | orchestrator | 2025-04-05 12:49:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:14.848231 | orchestrator | 2025-04-05 12:49:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:17.888447 | orchestrator | 2025-04-05 12:49:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:20.922847 | orchestrator | 2025-04-05 12:49:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:23.965487 | orchestrator | 2025-04-05 12:49:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:27.020930 | orchestrator | 2025-04-05 12:49:27 | INFO  | Task 44f63265-0c62-431a-ba77-fa4db4d7f7dc is in state STARTED 2025-04-05 12:49:30.068794 | orchestrator | 2025-04-05 12:49:27 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:49:30.068977 | orchestrator | 2025-04-05 12:49:30 | INFO  | Task 44f63265-0c62-431a-ba77-fa4db4d7f7dc is in state STARTED 2025-04-05 12:49:33.118231 | orchestrator | 2025-04-05 12:49:30 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:49:33.118352 | orchestrator | 2025-04-05 12:49:33 | INFO  | Task 44f63265-0c62-431a-ba77-fa4db4d7f7dc is in state STARTED 2025-04-05 12:49:36.164919 | orchestrator | 2025-04-05 12:49:33 | INFO  | Wait 1 second(s) until the next check 2025-04-05 12:49:36.165057 | orchestrator | 2025-04-05 12:49:36 | INFO  | Task 44f63265-0c62-431a-ba77-fa4db4d7f7dc is in state SUCCESS 2025-04-05 12:49:39.209180 | orchestrator | 2025-04-05 12:49:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:39.209336 | orchestrator | 2025-04-05 12:49:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:42.274619 | orchestrator | 2025-04-05 12:49:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:45.319581 | orchestrator | 2025-04-05 12:49:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:48.364979 | orchestrator | 2025-04-05 12:49:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:51.412421 | orchestrator | 2025-04-05 12:49:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:54.459458 | orchestrator | 2025-04-05 12:49:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:49:57.507722 | orchestrator | 2025-04-05 12:49:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:50:00.544115 | orchestrator | 2025-04-05 12:50:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:50:03.588924 | orchestrator | 2025-04-05 12:50:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 
12:50:06.640838 | orchestrator | 2025-04-05 12:50:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:50:09.680758 | orchestrator | 2025-04-05 12:50:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-04-05 12:50:12.718763 | orchestrator | 2025-04-05 12:50:12.938284 | orchestrator | None 2025-04-05 12:50:12.938408 | orchestrator | 2025-04-05 12:50:12.945638 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Apr 5 12:50:12 UTC 2025 2025-04-05 12:50:12.945697 | orchestrator | 2025-04-05 12:50:23.538004 | orchestrator | changed 2025-04-05 12:50:23.828295 | 2025-04-05 12:50:23.828474 | TASK [Bootstrap services] 2025-04-05 12:50:24.465799 | orchestrator | 2025-04-05 12:50:24.474909 | orchestrator | # BOOTSTRAP 2025-04-05 12:50:24.474953 | orchestrator | 2025-04-05 12:50:24.474972 | orchestrator | + set -e 2025-04-05 12:50:24.475012 | orchestrator | + echo 2025-04-05 12:50:24.475032 | orchestrator | + echo '# BOOTSTRAP' 2025-04-05 12:50:24.475049 | orchestrator | + echo 2025-04-05 12:50:24.475074 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-04-05 12:50:24.475109 | orchestrator | + set -e 2025-04-05 12:50:26.336566 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-04-05 12:50:26.336679 | orchestrator | 2025-04-05 12:50:26 | INFO  | It takes a moment until task 059f3969-987e-41dd-a33c-06f59f0f406b (flavor-manager) has been started and output is visible here. 2025-04-05 12:50:30.350946 | orchestrator | 2025-04-05 12:50:30 | INFO  | Flavor SCS-1V-4 created 2025-04-05 12:50:31.064534 | orchestrator | 2025-04-05 12:50:31 | INFO  | Flavor SCS-2V-8 created 2025-04-05 12:50:31.733126 | orchestrator | 2025-04-05 12:50:31 | INFO  | Flavor SCS-4V-16 created 2025-04-05 12:50:31.845828 | orchestrator | 2025-04-05 12:50:31 | INFO  | Flavor SCS-8V-32 created 2025-04-05 12:50:31.956395 | orchestrator | 2025-04-05 12:50:31 | INFO  | Flavor SCS-1V-2 created 2025-04-05 12:50:32.091612 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-2V-4 created 2025-04-05 12:50:32.231752 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-4V-8 created 2025-04-05 12:50:32.371560 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-8V-16 created 2025-04-05 12:50:32.512454 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-16V-32 created 2025-04-05 12:50:32.625297 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-1V-8 created 2025-04-05 12:50:32.719659 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-2V-16 created 2025-04-05 12:50:32.819837 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-4V-32 created 2025-04-05 12:50:32.917557 | orchestrator | 2025-04-05 12:50:32 | INFO  | Flavor SCS-1L-1 created 2025-04-05 12:50:33.022684 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-2V-4-20s created 2025-04-05 12:50:33.127656 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-4V-16-100s created 2025-04-05 12:50:33.233264 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-1V-4-10 created 2025-04-05 12:50:33.327353 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-2V-8-20 created 2025-04-05 12:50:33.422527 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-4V-16-50 created 2025-04-05 12:50:33.525713 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-8V-32-100 created 2025-04-05 12:50:33.626932 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-1V-2-5 created 2025-04-05 12:50:33.727008 | orchestrator | 2025-04-05 12:50:33 | INFO  | 
Flavor SCS-2V-4-10 created 2025-04-05 12:50:33.834981 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-4V-8-20 created 2025-04-05 12:50:33.944140 | orchestrator | 2025-04-05 12:50:33 | INFO  | Flavor SCS-8V-16-50 created 2025-04-05 12:50:34.071765 | orchestrator | 2025-04-05 12:50:34 | INFO  | Flavor SCS-16V-32-100 created 2025-04-05 12:50:34.189803 | orchestrator | 2025-04-05 12:50:34 | INFO  | Flavor SCS-1V-8-20 created 2025-04-05 12:50:34.287903 | orchestrator | 2025-04-05 12:50:34 | INFO  | Flavor SCS-2V-16-50 created 2025-04-05 12:50:34.384409 | orchestrator | 2025-04-05 12:50:34 | INFO  | Flavor SCS-4V-32-100 created 2025-04-05 12:50:34.492652 | orchestrator | 2025-04-05 12:50:34 | INFO  | Flavor SCS-1L-1-5 created 2025-04-05 12:50:36.575111 | orchestrator | 2025-04-05 12:50:36 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-04-05 12:50:36.642240 | orchestrator | 2025-04-05 12:50:36 | INFO  | Task 54209f0d-f1c9-4740-9a72-25a98d5d9738 (bootstrap-basic) was prepared for execution. 2025-04-05 12:50:40.567576 | orchestrator | 2025-04-05 12:50:36 | INFO  | It takes a moment until task 54209f0d-f1c9-4740-9a72-25a98d5d9738 (bootstrap-basic) has been started and output is visible here. 2025-04-05 12:50:40.567661 | orchestrator | 2025-04-05 12:50:40.568341 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-04-05 12:50:40.569932 | orchestrator | 2025-04-05 12:50:40.574103 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-05 12:50:42.335256 | orchestrator | Saturday 05 April 2025 12:50:40 +0000 (0:00:00.077) 0:00:00.077 ******** 2025-04-05 12:50:42.335384 | orchestrator | ok: [localhost] 2025-04-05 12:50:42.335779 | orchestrator | 2025-04-05 12:50:42.336421 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-04-05 12:50:42.337194 | orchestrator | Saturday 05 April 2025 12:50:42 +0000 (0:00:01.769) 0:00:01.847 ******** 2025-04-05 12:50:50.667432 | orchestrator | ok: [localhost] 2025-04-05 12:50:50.669613 | orchestrator | 2025-04-05 12:50:50.673318 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-04-05 12:50:50.676566 | orchestrator | Saturday 05 April 2025 12:50:50 +0000 (0:00:08.329) 0:00:10.177 ******** 2025-04-05 12:50:57.616582 | orchestrator | changed: [localhost] 2025-04-05 12:50:57.617135 | orchestrator | 2025-04-05 12:50:57.620541 | orchestrator | TASK [Get volume type local] *************************************************** 2025-04-05 12:50:57.621186 | orchestrator | Saturday 05 April 2025 12:50:57 +0000 (0:00:06.949) 0:00:17.126 ******** 2025-04-05 12:51:04.231952 | orchestrator | ok: [localhost] 2025-04-05 12:51:04.233464 | orchestrator | 2025-04-05 12:51:04.234449 | orchestrator | TASK [Create volume type local] ************************************************ 2025-04-05 12:51:04.235637 | orchestrator | Saturday 05 April 2025 12:51:04 +0000 (0:00:06.616) 0:00:23.742 ******** 2025-04-05 12:51:09.981039 | orchestrator | changed: [localhost] 2025-04-05 12:51:09.981556 | orchestrator | 2025-04-05 12:51:09.981599 | orchestrator | TASK [Create public network] *************************************************** 2025-04-05 12:51:09.982630 | orchestrator | Saturday 05 April 2025 12:51:09 +0000 (0:00:05.749) 0:00:29.492 ******** 2025-04-05 12:51:16.735130 | orchestrator | changed: [localhost] 2025-04-05 12:51:16.735463 | orchestrator | 2025-04-05 
12:51:16.735631 | orchestrator | TASK [Set public network to default] ******************************************* 2025-04-05 12:51:16.735717 | orchestrator | Saturday 05 April 2025 12:51:16 +0000 (0:00:06.753) 0:00:36.246 ******** 2025-04-05 12:51:22.434971 | orchestrator | changed: [localhost] 2025-04-05 12:51:22.436845 | orchestrator | 2025-04-05 12:51:22.436955 | orchestrator | TASK [Create public subnet] **************************************************** 2025-04-05 12:51:27.238704 | orchestrator | Saturday 05 April 2025 12:51:22 +0000 (0:00:05.700) 0:00:41.946 ******** 2025-04-05 12:51:27.238851 | orchestrator | changed: [localhost] 2025-04-05 12:51:27.239339 | orchestrator | 2025-04-05 12:51:27.240170 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-04-05 12:51:27.240921 | orchestrator | Saturday 05 April 2025 12:51:27 +0000 (0:00:04.802) 0:00:46.748 ******** 2025-04-05 12:51:31.506318 | orchestrator | changed: [localhost] 2025-04-05 12:51:31.508917 | orchestrator | 2025-04-05 12:51:31.508962 | orchestrator | TASK [Create manager role] ***************************************************** 2025-04-05 12:51:31.509908 | orchestrator | Saturday 05 April 2025 12:51:31 +0000 (0:00:04.268) 0:00:51.017 ******** 2025-04-05 12:51:35.039989 | orchestrator | ok: [localhost] 2025-04-05 12:51:35.041622 | orchestrator | 2025-04-05 12:51:35.042142 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:51:35.042189 | orchestrator | 2025-04-05 12:51:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:51:35.042963 | orchestrator | 2025-04-05 12:51:35 | INFO  | Please wait and do not abort execution. 
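The bootstrap-basic play above creates the LUKS and local volume types (checking first whether they already exist), a public external network with a subnet and a default IPv4 subnet pool, and a manager role. The following is a rough openstacksdk sketch of those steps, not the play itself; the cloud name, CIDR and prefix values are placeholders rather than the testbed's configuration, and the LUKS encryption settings are omitted.

import openstack

conn = openstack.connect(cloud="admin")      # cloud name is a placeholder

# Volume types: "Get ..." then "Create ..." only if missing.
for type_name in ("LUKS", "local"):
    if conn.block_storage.find_type(type_name) is None:
        conn.block_storage.create_type(name=type_name)

# Public external network, then mark it as the default external network
# (mirrors the separate "Set public network to default" task).
public = conn.network.find_network("public")
if public is None:
    public = conn.network.create_network(name="public",
                                         is_router_external=True)
    conn.network.update_network(public, is_default=True)
    conn.network.create_subnet(
        network_id=public.id,
        name="public-subnet",
        ip_version=4,
        cidr="192.0.2.0/24",                 # placeholder CIDR
    )

# Default IPv4 subnet pool and the manager role.
if conn.network.find_subnet_pool("default-ipv4") is None:
    conn.network.create_subnet_pool(
        name="default-ipv4", prefixes=["10.0.0.0/16"], is_default=True)
if conn.identity.find_role("manager") is None:
    conn.identity.create_role(name="manager")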
2025-04-05 12:51:35.042997 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 12:51:35.043734 | orchestrator | 2025-04-05 12:51:35.044169 | orchestrator | 2025-04-05 12:51:35.044764 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:51:35.045683 | orchestrator | Saturday 05 April 2025 12:51:35 +0000 (0:00:03.533) 0:00:54.551 ******** 2025-04-05 12:51:35.046011 | orchestrator | =============================================================================== 2025-04-05 12:51:35.046363 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.33s 2025-04-05 12:51:35.046748 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.95s 2025-04-05 12:51:35.047162 | orchestrator | Create public network --------------------------------------------------- 6.75s 2025-04-05 12:51:35.047262 | orchestrator | Get volume type local --------------------------------------------------- 6.62s 2025-04-05 12:51:35.047698 | orchestrator | Create volume type local ------------------------------------------------ 5.75s 2025-04-05 12:51:35.047989 | orchestrator | Set public network to default ------------------------------------------- 5.70s 2025-04-05 12:51:35.048417 | orchestrator | Create public subnet ---------------------------------------------------- 4.80s 2025-04-05 12:51:35.049035 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.27s 2025-04-05 12:51:35.049707 | orchestrator | Create manager role ----------------------------------------------------- 3.53s 2025-04-05 12:51:35.050156 | orchestrator | Gathering Facts --------------------------------------------------------- 1.77s 2025-04-05 12:51:37.145925 | orchestrator | 2025-04-05 12:51:37 | INFO  | It takes a moment until task 23bfb92b-8067-40ab-ae09-a24007aeac91 (image-manager) has been started and output is visible here. 2025-04-05 12:51:40.352736 | orchestrator | 2025-04-05 12:51:40 | INFO  | Processing image 'Cirros 0.6.2' 2025-04-05 12:51:40.588539 | orchestrator | 2025-04-05 12:51:40 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-04-05 12:51:40.590188 | orchestrator | 2025-04-05 12:51:40 | INFO  | Importing image Cirros 0.6.2 2025-04-05 12:51:40.590832 | orchestrator | 2025-04-05 12:51:40 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-04-05 12:51:42.037753 | orchestrator | 2025-04-05 12:51:42 | INFO  | Waiting for image to leave queued state... 2025-04-05 12:51:44.074903 | orchestrator | 2025-04-05 12:51:44 | INFO  | Waiting for import to complete... 
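The image-manager output above follows Glance's interoperable image import: the image record is created, the file is fetched from the upstream URL via the web-download method, and the task polls until the image leaves the queued state and the import completes. A compact openstacksdk sketch of that flow follows; it is not the openstack-image-manager implementation itself, and the cloud name is a placeholder.

import openstack

conn = openstack.connect(cloud="admin")      # cloud name is a placeholder

# Register an empty (queued) image record, then ask Glance to fetch the
# file itself via the web-download import method.
image = conn.image.create_image(
    name="Cirros 0.6.2",
    disk_format="qcow2",
    container_format="bare",
    allow_duplicates=True,                   # skip the SDK's checksum dedup
)
conn.image.import_image(
    image,
    method="web-download",
    uri="https://github.com/cirros-dev/cirros/releases/download/0.6.2/"
        "cirros-0.6.2-x86_64-disk.img",
)

# Poll until the import finishes ("queued" -> importing -> "active").
image = conn.image.wait_for_status(image, "active",
                                   failures=["killed"], wait=600)

# The property, tag and visibility settings that follow in the log would
# then be applied with further update_image() calls, for example:
conn.image.update_image(image, visibility="public", architecture="x86_64")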
2025-04-05 12:51:54.370254 | orchestrator | 2025-04-05 12:51:54 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-04-05 12:51:54.547327 | orchestrator | 2025-04-05 12:51:54 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-04-05 12:51:54.547986 | orchestrator | 2025-04-05 12:51:54 | INFO  | Setting internal_version = 0.6.2 2025-04-05 12:51:54.548844 | orchestrator | 2025-04-05 12:51:54 | INFO  | Setting image_original_user = cirros 2025-04-05 12:51:54.549283 | orchestrator | 2025-04-05 12:51:54 | INFO  | Adding tag os:cirros 2025-04-05 12:51:54.757135 | orchestrator | 2025-04-05 12:51:54 | INFO  | Setting property architecture: x86_64 2025-04-05 12:51:54.982814 | orchestrator | 2025-04-05 12:51:54 | INFO  | Setting property hw_disk_bus: scsi 2025-04-05 12:51:55.173839 | orchestrator | 2025-04-05 12:51:55 | INFO  | Setting property hw_rng_model: virtio 2025-04-05 12:51:55.369048 | orchestrator | 2025-04-05 12:51:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-04-05 12:51:55.550670 | orchestrator | 2025-04-05 12:51:55 | INFO  | Setting property hw_watchdog_action: reset 2025-04-05 12:51:55.734753 | orchestrator | 2025-04-05 12:51:55 | INFO  | Setting property hypervisor_type: qemu 2025-04-05 12:51:55.926154 | orchestrator | 2025-04-05 12:51:55 | INFO  | Setting property os_distro: cirros 2025-04-05 12:51:56.091614 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property replace_frequency: never 2025-04-05 12:51:56.259923 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property uuid_validity: none 2025-04-05 12:51:56.435568 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property provided_until: none 2025-04-05 12:51:56.613349 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property image_description: Cirros 2025-04-05 12:51:56.800751 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property image_name: Cirros 2025-04-05 12:51:56.964258 | orchestrator | 2025-04-05 12:51:56 | INFO  | Setting property internal_version: 0.6.2 2025-04-05 12:51:57.139003 | orchestrator | 2025-04-05 12:51:57 | INFO  | Setting property image_original_user: cirros 2025-04-05 12:51:57.325098 | orchestrator | 2025-04-05 12:51:57 | INFO  | Setting property os_version: 0.6.2 2025-04-05 12:51:57.486723 | orchestrator | 2025-04-05 12:51:57 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-04-05 12:51:57.666490 | orchestrator | 2025-04-05 12:51:57 | INFO  | Setting property image_build_date: 2023-05-30 2025-04-05 12:51:57.856453 | orchestrator | 2025-04-05 12:51:57 | INFO  | Checking status of 'Cirros 0.6.2' 2025-04-05 12:51:57.857091 | orchestrator | 2025-04-05 12:51:57 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-04-05 12:51:57.857999 | orchestrator | 2025-04-05 12:51:57 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-04-05 12:51:58.036351 | orchestrator | 2025-04-05 12:51:58 | INFO  | Processing image 'Cirros 0.6.3' 2025-04-05 12:51:58.086951 | orchestrator | 2025-04-05 12:51:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-04-05 12:51:58.087940 | orchestrator | 2025-04-05 12:51:58 | INFO  | Importing image Cirros 0.6.3 2025-04-05 12:51:58.088479 | orchestrator | 2025-04-05 12:51:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-04-05 12:51:59.156498 | orchestrator | 2025-04-05 
12:51:59 | INFO  | Waiting for image to leave queued state... 2025-04-05 12:52:01.341473 | orchestrator | 2025-04-05 12:52:01 | INFO  | Waiting for import to complete... 2025-04-05 12:52:11.634483 | orchestrator | 2025-04-05 12:52:11 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-04-05 12:52:11.854401 | orchestrator | 2025-04-05 12:52:11 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-04-05 12:52:11.855024 | orchestrator | 2025-04-05 12:52:11 | INFO  | Setting internal_version = 0.6.3 2025-04-05 12:52:11.855057 | orchestrator | 2025-04-05 12:52:11 | INFO  | Setting image_original_user = cirros 2025-04-05 12:52:11.855420 | orchestrator | 2025-04-05 12:52:11 | INFO  | Adding tag os:cirros 2025-04-05 12:52:12.084726 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property architecture: x86_64 2025-04-05 12:52:12.233701 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property hw_disk_bus: scsi 2025-04-05 12:52:12.447353 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property hw_rng_model: virtio 2025-04-05 12:52:12.630235 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-04-05 12:52:12.781220 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property hw_watchdog_action: reset 2025-04-05 12:52:12.935922 | orchestrator | 2025-04-05 12:52:12 | INFO  | Setting property hypervisor_type: qemu 2025-04-05 12:52:13.112900 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property os_distro: cirros 2025-04-05 12:52:13.289587 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property replace_frequency: never 2025-04-05 12:52:13.449154 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property uuid_validity: none 2025-04-05 12:52:13.623910 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property provided_until: none 2025-04-05 12:52:13.795986 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property image_description: Cirros 2025-04-05 12:52:13.977018 | orchestrator | 2025-04-05 12:52:13 | INFO  | Setting property image_name: Cirros 2025-04-05 12:52:14.194245 | orchestrator | 2025-04-05 12:52:14 | INFO  | Setting property internal_version: 0.6.3 2025-04-05 12:52:14.357038 | orchestrator | 2025-04-05 12:52:14 | INFO  | Setting property image_original_user: cirros 2025-04-05 12:52:14.524131 | orchestrator | 2025-04-05 12:52:14 | INFO  | Setting property os_version: 0.6.3 2025-04-05 12:52:14.701697 | orchestrator | 2025-04-05 12:52:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-04-05 12:52:14.861371 | orchestrator | 2025-04-05 12:52:14 | INFO  | Setting property image_build_date: 2024-09-26 2025-04-05 12:52:15.035079 | orchestrator | 2025-04-05 12:52:15 | INFO  | Checking status of 'Cirros 0.6.3' 2025-04-05 12:52:15.035442 | orchestrator | 2025-04-05 12:52:15 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-04-05 12:52:15.036139 | orchestrator | 2025-04-05 12:52:15 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-04-05 12:52:15.883302 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-04-05 12:52:17.664402 | orchestrator | 2025-04-05 12:52:17 | INFO  | date: 2025-04-05 2025-04-05 12:52:17.718245 | orchestrator | 2025-04-05 12:52:17 | INFO  | image: octavia-amphora-haproxy-2024.1.20250405.qcow2 2025-04-05 12:52:17.718301 | orchestrator | 2025-04-05 12:52:17 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2 2025-04-05 12:52:17.718346 | orchestrator | 2025-04-05 12:52:17 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2.CHECKSUM 2025-04-05 12:52:17.718378 | orchestrator | 2025-04-05 12:52:17 | INFO  | checksum: b77025c10c48f24ae489c3632487407457811ba62ca019c4a9bd851afa965be8 2025-04-05 12:52:17.788092 | orchestrator | 2025-04-05 12:52:17 | INFO  | It takes a moment until task b7305e29-bb42-409b-8f4a-b444e534acc7 (image-manager) has been started and output is visible here. 2025-04-05 12:52:19.934692 | orchestrator | 2025-04-05 12:52:19 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-04-05' 2025-04-05 12:52:19.951888 | orchestrator | 2025-04-05 12:52:19 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2: 200 2025-04-05 12:52:19.952625 | orchestrator | 2025-04-05 12:52:19 | INFO  | Importing image OpenStack Octavia Amphora 2025-04-05 2025-04-05 12:52:19.953893 | orchestrator | 2025-04-05 12:52:19 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2 2025-04-05 12:52:20.253055 | orchestrator | 2025-04-05 12:52:20 | INFO  | Waiting for image to leave queued state... 2025-04-05 12:52:22.290153 | orchestrator | 2025-04-05 12:52:22 | INFO  | Waiting for import to complete... 2025-04-05 12:52:32.368768 | orchestrator | 2025-04-05 12:52:32 | INFO  | Waiting for import to complete... 2025-04-05 12:52:42.442156 | orchestrator | 2025-04-05 12:52:42 | INFO  | Waiting for import to complete... 2025-04-05 12:52:52.513439 | orchestrator | 2025-04-05 12:52:52 | INFO  | Waiting for import to complete... 
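The bootstrap script prints both the amphora image URL and a published SHA256, so the artifact can be verified independently of Glance's own hash handling. A minimal sketch of that check, reusing the URL and checksum from the lines above (assuming curl and coreutils are available on the manager):

  # Fetch the amphora image.
  curl -L -o octavia-amphora-haproxy-2024.1.20250405.qcow2 \
      "https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2"

  # sha256sum -c expects "<hash><two spaces><file>" on stdin.
  echo "b77025c10c48f24ae489c3632487407457811ba62ca019c4a9bd851afa965be8  octavia-amphora-haproxy-2024.1.20250405.qcow2" \
      | sha256sum -c -

Once the import completes, the image is tagged "amphora" (visible further down); Octavia typically locates its amphora image through such a tag (the amp_image_tag option), so the tagging step matters as much as the upload itself.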
2025-04-05 12:53:02.626304 | orchestrator | 2025-04-05 12:53:02 | INFO  | Import of 'OpenStack Octavia Amphora 2025-04-05' successfully completed, reloading images 2025-04-05 12:53:02.896590 | orchestrator | 2025-04-05 12:53:02 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-04-05' 2025-04-05 12:53:02.897320 | orchestrator | 2025-04-05 12:53:02 | INFO  | Setting internal_version = 2025-04-05 2025-04-05 12:53:02.897358 | orchestrator | 2025-04-05 12:53:02 | INFO  | Setting image_original_user = ubuntu 2025-04-05 12:53:02.897968 | orchestrator | 2025-04-05 12:53:02 | INFO  | Adding tag amphora 2025-04-05 12:53:03.102408 | orchestrator | 2025-04-05 12:53:03 | INFO  | Adding tag os:ubuntu 2025-04-05 12:53:03.261721 | orchestrator | 2025-04-05 12:53:03 | INFO  | Setting property architecture: x86_64 2025-04-05 12:53:03.420977 | orchestrator | 2025-04-05 12:53:03 | INFO  | Setting property hw_disk_bus: scsi 2025-04-05 12:53:03.599884 | orchestrator | 2025-04-05 12:53:03 | INFO  | Setting property hw_rng_model: virtio 2025-04-05 12:53:03.751984 | orchestrator | 2025-04-05 12:53:03 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-04-05 12:53:03.909572 | orchestrator | 2025-04-05 12:53:03 | INFO  | Setting property hw_watchdog_action: reset 2025-04-05 12:53:04.113220 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property hypervisor_type: qemu 2025-04-05 12:53:04.305110 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property os_distro: ubuntu 2025-04-05 12:53:04.503604 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property replace_frequency: quarterly 2025-04-05 12:53:04.662205 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property uuid_validity: last-1 2025-04-05 12:53:04.838418 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property provided_until: none 2025-04-05 12:53:05.002254 | orchestrator | 2025-04-05 12:53:04 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-04-05 12:53:05.171933 | orchestrator | 2025-04-05 12:53:05 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-04-05 12:53:05.372837 | orchestrator | 2025-04-05 12:53:05 | INFO  | Setting property internal_version: 2025-04-05 2025-04-05 12:53:05.561553 | orchestrator | 2025-04-05 12:53:05 | INFO  | Setting property image_original_user: ubuntu 2025-04-05 12:53:05.756213 | orchestrator | 2025-04-05 12:53:05 | INFO  | Setting property os_version: 2025-04-05 2025-04-05 12:53:06.080588 | orchestrator | 2025-04-05 12:53:06 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.1.20250405.qcow2 2025-04-05 12:53:06.271682 | orchestrator | 2025-04-05 12:53:06 | INFO  | Setting property image_build_date: 2025-04-05 2025-04-05 12:53:06.480056 | orchestrator | 2025-04-05 12:53:06 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-04-05' 2025-04-05 12:53:06.481566 | orchestrator | 2025-04-05 12:53:06 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-04-05' 2025-04-05 12:53:06.624587 | orchestrator | 2025-04-05 12:53:06 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-04-05 12:53:06.625288 | orchestrator | 2025-04-05 12:53:06 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-04-05 12:53:06.626235 | orchestrator | 2025-04-05 12:53:06 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-04-05 12:53:06.627692 | 
orchestrator | 2025-04-05 12:53:06 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-04-05 12:53:07.460137 | orchestrator | changed 2025-04-05 12:53:07.483811 | 2025-04-05 12:53:07.483914 | TASK [Run checks] 2025-04-05 12:53:08.168586 | orchestrator | + set -e 2025-04-05 12:53:08.168761 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 12:53:08.168787 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 12:53:08.168805 | orchestrator | ++ INTERACTIVE=false 2025-04-05 12:53:08.168877 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 12:53:08.168898 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 12:53:08.168922 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-04-05 12:53:08.170008 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-04-05 12:53:08.205890 | orchestrator | 2025-04-05 12:53:08.206843 | orchestrator | # CHECK 2025-04-05 12:53:08.206884 | orchestrator | 2025-04-05 12:53:08.206899 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:53:08.206914 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:53:08.206928 | orchestrator | + echo 2025-04-05 12:53:08.206942 | orchestrator | + echo '# CHECK' 2025-04-05 12:53:08.206957 | orchestrator | + echo 2025-04-05 12:53:08.206972 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-04-05 12:53:08.206992 | orchestrator | ++ semver latest 5.0.0 2025-04-05 12:53:08.264199 | orchestrator | 2025-04-05 12:53:10.157736 | orchestrator | ## Containers @ testbed-manager 2025-04-05 12:53:10.157897 | orchestrator | 2025-04-05 12:53:10.157918 | orchestrator | + [[ -1 -eq -1 ]] 2025-04-05 12:53:10.157933 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:53:10.157946 | orchestrator | + echo 2025-04-05 12:53:10.157960 | orchestrator | + echo '## Containers @ testbed-manager' 2025-04-05 12:53:10.157974 | orchestrator | + echo 2025-04-05 12:53:10.157987 | orchestrator | + osism container testbed-manager ps 2025-04-05 12:53:10.158082 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-04-05 12:53:10.158106 | orchestrator | 674dfa3569fd registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_blackbox_exporter 2025-04-05 12:53:10.158126 | orchestrator | ae2c34e3708b registry.osism.tech/kolla/prometheus-alertmanager:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_alertmanager 2025-04-05 12:53:10.158142 | orchestrator | bc636a93bf63 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 10 minutes prometheus_cadvisor 2025-04-05 12:53:10.158165 | orchestrator | f617c50149f3 registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2025-04-05 12:53:10.158179 | orchestrator | dd0005a4afec registry.osism.tech/kolla/prometheus-v2-server:2024.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2025-04-05 12:53:10.158206 | orchestrator | a2246c076c7a registry.osism.tech/osism/cephclient:quincy "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2025-04-05 12:53:10.158220 | orchestrator | 3fe3b2e9074b registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-04-05 12:53:10.158233 | orchestrator | c45ce19ec6ec 
registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-04-05 12:53:10.158245 | orchestrator | 734944ccb147 registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-04-05 12:53:10.158283 | orchestrator | b0eb578989ca phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-04-05 12:53:10.158296 | orchestrator | bcd916af109e registry.osism.tech/osism/openstackclient:2024.1 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2025-04-05 12:53:10.158309 | orchestrator | 54bafeae95fe registry.osism.tech/osism/homer:v25.03.3 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-04-05 12:53:10.158328 | orchestrator | b449db1e976c ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 49 minutes ago Up 48 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-04-05 12:53:10.158341 | orchestrator | 2f5a97cd60fe registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 54 minutes ago Up 53 minutes (healthy) manager-inventory_reconciler-1 2025-04-05 12:53:10.158353 | orchestrator | f2b2de01c149 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) osism-ansible 2025-04-05 12:53:10.158380 | orchestrator | 4a7af89db986 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) osism-kubernetes 2025-04-05 12:53:10.158394 | orchestrator | 0ea14f167e3c registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) kolla-ansible 2025-04-05 12:53:10.158407 | orchestrator | 27129bc1eaff registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) ceph-ansible 2025-04-05 12:53:10.158419 | orchestrator | 9c836dd027a7 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 54 minutes ago Up 53 minutes (healthy) 8000/tcp manager-ara-server-1 2025-04-05 12:53:10.158432 | orchestrator | 5b387ef1128f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-conductor-1 2025-04-05 12:53:10.158444 | orchestrator | 8e948c4b82a7 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-netbox-1 2025-04-05 12:53:10.158457 | orchestrator | cb027a761a10 mariadb:11.7.2 "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 3306/tcp manager-mariadb-1 2025-04-05 12:53:10.158474 | orchestrator | c7f5e8c9cd72 redis:7.4.2-alpine "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 6379/tcp manager-redis-1 2025-04-05 12:53:10.158494 | orchestrator | 6305912e0a97 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-flower-1 2025-04-05 12:53:10.158513 | orchestrator | af9ae8bc4105 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-watchdog-1 2025-04-05 12:53:10.158526 | orchestrator | 701f4c8b8c8d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-beat-1 2025-04-05 12:53:10.158538 | orchestrator | bd093b9a5662 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 54 minutes ago Up 54 minutes (healthy) osismclient 2025-04-05 12:53:10.158551 | orchestrator | ec5df939f034 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-listener-1 2025-04-05 12:53:10.158564 | orchestrator | 3ef9e354731f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-openstack-1 2025-04-05 12:53:10.158577 | orchestrator | d2ca93edc8e1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-04-05 12:53:10.158592 | orchestrator | 618655676b34 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" 59 minutes ago Up 55 minutes (healthy) netbox-netbox-worker-1 2025-04-05 12:53:10.158616 | orchestrator | 704fc6d36097 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" 59 minutes ago Up 59 minutes (healthy) netbox-netbox-1 2025-04-05 12:53:10.412697 | orchestrator | ce59574f47c9 postgres:16.8-alpine "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 5432/tcp netbox-postgres-1 2025-04-05 12:53:10.412808 | orchestrator | 732f8974b401 redis:7.4.2-alpine "docker-entrypoint.s…" 59 minutes ago Up 59 minutes (healthy) 6379/tcp netbox-redis-1 2025-04-05 12:53:10.412826 | orchestrator | a494d5e4b309 traefik:v3.3.5 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-04-05 12:53:10.412882 | orchestrator | 2025-04-05 12:53:12.308283 | orchestrator | ## Images @ testbed-manager 2025-04-05 12:53:12.308381 | orchestrator | 2025-04-05 12:53:12.308398 | orchestrator | + echo 2025-04-05 12:53:12.308413 | orchestrator | + echo '## Images @ testbed-manager' 2025-04-05 12:53:12.308429 | orchestrator | + echo 2025-04-05 12:53:12.308443 | orchestrator | + osism container testbed-manager images 2025-04-05 12:53:12.308474 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-04-05 12:53:12.519232 | orchestrator | registry.osism.tech/osism/homer v25.03.3 41d41a6f89eb 10 hours ago 11MB 2025-04-05 12:53:12.519295 | orchestrator | registry.osism.tech/osism/cephclient quincy 31ad76c23f51 10 hours ago 446MB 2025-04-05 12:53:12.519333 | orchestrator | registry.osism.tech/osism/osism-ansible latest 371d39b1c364 13 hours ago 553MB 2025-04-05 12:53:12.519348 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.1 6a9bae803bed 13 hours ago 575MB 2025-04-05 12:53:12.519362 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 6226134384d5 13 hours ago 1.07GB 2025-04-05 12:53:12.519379 | orchestrator | registry.osism.tech/osism/osism latest aba3d7a58647 13 hours ago 339MB 2025-04-05 12:53:12.519393 | orchestrator | registry.osism.tech/osism/ceph-ansible quincy 3438833b84d8 13 hours ago 534MB 2025-04-05 12:53:12.519407 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 47a4a06c5783 13 hours ago 278MB 2025-04-05 12:53:12.519421 | orchestrator | registry.osism.tech/kolla/cron 2024.1 d5b9b18eb6ca 4 days ago 274MB 2025-04-05 12:53:12.519435 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 a9d8718230d6 4 days ago 545MB 2025-04-05 12:53:12.519450 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 678b77b35148 4 days ago 651MB 2025-04-05 12:53:12.519464 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 efe4d8c5bf49 4 days ago 314MB 2025-04-05 12:53:12.519478 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.1 6c3a093ea6d8 4 days ago 413MB 2025-04-05 12:53:12.519492 | orchestrator | 
registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.1 452471b2874d 4 days ago 317MB 2025-04-05 12:53:12.519506 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 b565c6f7d7d6 4 days ago 366MB 2025-04-05 12:53:12.519520 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.1 8517631480bc 4 days ago 848MB 2025-04-05 12:53:12.519534 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 4 days ago 817MB 2025-04-05 12:53:12.519548 | orchestrator | traefik v3.3.5 66c037adf0b4 5 days ago 221MB 2025-04-05 12:53:12.519562 | orchestrator | hashicorp/vault 1.19.0 1374b31c5b3d 4 weeks ago 502MB 2025-04-05 12:53:12.519576 | orchestrator | postgres 16.8-alpine 2875f9e036c2 5 weeks ago 275MB 2025-04-05 12:53:12.519591 | orchestrator | mariadb 11.7.2 a914eff5d2eb 7 weeks ago 336MB 2025-04-05 12:53:12.519605 | orchestrator | registry.osism.tech/osism/openstackclient 2024.1 2997541d3529 8 weeks ago 248MB 2025-04-05 12:53:12.519619 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 2 months ago 571MB 2025-04-05 12:53:12.519650 | orchestrator | redis 7.4.2-alpine 8f5c54441eb9 2 months ago 41.4MB 2025-04-05 12:53:12.519665 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 7 months ago 300MB 2025-04-05 12:53:12.519679 | orchestrator | ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 9 months ago 146MB 2025-04-05 12:53:12.519705 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-04-05 12:53:12.520216 | orchestrator | ++ semver latest 5.0.0 2025-04-05 12:53:12.559615 | orchestrator | 2025-04-05 12:53:14.519504 | orchestrator | ## Containers @ testbed-node-0 2025-04-05 12:53:14.519597 | orchestrator | 2025-04-05 12:53:14.519616 | orchestrator | + [[ -1 -eq -1 ]] 2025-04-05 12:53:14.519630 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:53:14.519644 | orchestrator | + echo 2025-04-05 12:53:14.519659 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-04-05 12:53:14.519674 | orchestrator | + echo 2025-04-05 12:53:14.519688 | orchestrator | + osism container testbed-node-0 ps 2025-04-05 12:53:14.519722 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-04-05 12:53:14.520031 | orchestrator | 5f0a5f6f8626 registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-04-05 12:53:14.520053 | orchestrator | 7255bb5aa6f7 registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_conductor 2025-04-05 12:53:14.520067 | orchestrator | d368b70492c5 registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-04-05 12:53:14.520082 | orchestrator | 028b32ba289f registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-04-05 12:53:14.520096 | orchestrator | 9b23171e90d5 registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-04-05 12:53:14.520111 | orchestrator | f4c2b81a45a9 registry.osism.tech/kolla/glance-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-04-05 12:53:14.520125 | orchestrator | 74f27404d50e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-04-05 12:53:14.520139 | orchestrator | c12001ef3ac8 
registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-04-05 12:53:14.520153 | orchestrator | fbd7227636cf registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-04-05 12:53:14.520168 | orchestrator | 795f4d034db9 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-04-05 12:53:14.520182 | orchestrator | cc92a0809cae registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-04-05 12:53:14.520196 | orchestrator | 5c22f0e93d76 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter 2025-04-05 12:53:14.520210 | orchestrator | 7045b881dba8 registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2025-04-05 12:53:14.520224 | orchestrator | 6547ab1f4466 registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-04-05 12:53:14.520238 | orchestrator | f66f8d9027b8 registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-04-05 12:53:14.520255 | orchestrator | 2b42a2fde96f registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-04-05 12:53:14.520270 | orchestrator | a8ee3d18062d registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-04-05 12:53:14.520284 | orchestrator | 9940c5a1978e registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-04-05 12:53:14.520299 | orchestrator | a2364762af7b registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-04-05 12:53:14.520333 | orchestrator | 4a10da5e96a3 registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-04-05 12:53:14.520362 | orchestrator | 1de1136d8b62 registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-04-05 12:53:14.520378 | orchestrator | 005dd99368e4 registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-04-05 12:53:14.520392 | orchestrator | ace88281467e registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-04-05 12:53:14.520406 | orchestrator | eb90d83ff50f registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-04-05 12:53:14.520420 | orchestrator | 8b0be0c2d3fc registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-04-05 12:53:14.520434 | orchestrator | 428728615de2 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-04-05 12:53:14.520448 | orchestrator | b74236a0f9a2 
registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-04-05 12:53:14.520463 | orchestrator | 96dec4649ef2 registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-04-05 12:53:14.520477 | orchestrator | 5623eae44ba3 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-04-05 12:53:14.520491 | orchestrator | 46538a2a41e9 registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-04-05 12:53:14.520505 | orchestrator | 0c6bd7aef8a5 registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-04-05 12:53:14.520519 | orchestrator | 6fdfc7022322 registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-04-05 12:53:14.520533 | orchestrator | 53f118c1481a registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-04-05 12:53:14.520547 | orchestrator | 38bc3fe0b675 registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-04-05 12:53:14.520561 | orchestrator | 6f7dc552b19c registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-04-05 12:53:14.520575 | orchestrator | e6477f680a20 registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-04-05 12:53:14.520589 | orchestrator | 2769fc2e10a2 registry.osism.tech/kolla/proxysql:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-04-05 12:53:14.520603 | orchestrator | e0d02dc6782c registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-04-05 12:53:14.520617 | orchestrator | 9680daefa178 registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-04-05 12:53:14.520637 | orchestrator | 9fd19f771f0d registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-04-05 12:53:14.520651 | orchestrator | 7d9a8918cc61 registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-04-05 12:53:14.520665 | orchestrator | f02557c0af4b registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-04-05 12:53:14.520688 | orchestrator | 89d972a6587c registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-04-05 12:53:14.753680 | orchestrator | 906076000c7f registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-04-05 12:53:14.753731 | orchestrator | e558f27c6ece registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-04-05 12:53:14.753746 | orchestrator | 6cb426c80a1b registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-04-05 12:53:14.753761 | orchestrator | 095063333cde 
registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-04-05 12:53:14.753775 | orchestrator | 3aa64c840e84 registry.osism.tech/kolla/redis:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-04-05 12:53:14.753789 | orchestrator | fc339c4fff39 registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-04-05 12:53:14.753815 | orchestrator | 331f5660f50c registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-04-05 12:53:14.753830 | orchestrator | 2f2baff8b320 registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-04-05 12:53:14.753843 | orchestrator | 63a68cfbf3da registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-04-05 12:53:14.753888 | orchestrator | 2025-04-05 12:53:16.570835 | orchestrator | ## Images @ testbed-node-0 2025-04-05 12:53:16.570961 | orchestrator | 2025-04-05 12:53:16.570981 | orchestrator | + echo 2025-04-05 12:53:16.570997 | orchestrator | + echo '## Images @ testbed-node-0' 2025-04-05 12:53:16.571013 | orchestrator | + echo 2025-04-05 12:53:16.571027 | orchestrator | + osism container testbed-node-0 images 2025-04-05 12:53:16.571057 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-04-05 12:53:16.571073 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy f9bc1ac57693 10 hours ago 1.38GB 2025-04-05 12:53:16.571088 | orchestrator | registry.osism.tech/kolla/grafana 2024.1 5f628eb6465a 4 days ago 946MB 2025-04-05 12:53:16.571102 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 35ab75b661ca 4 days ago 1.55GB 2025-04-05 12:53:16.571118 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 5ef50f941bee 4 days ago 1.49GB 2025-04-05 12:53:16.571132 | orchestrator | registry.osism.tech/kolla/cron 2024.1 d5b9b18eb6ca 4 days ago 274MB 2025-04-05 12:53:16.571146 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 a9d8718230d6 4 days ago 545MB 2025-04-05 12:53:16.571161 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 e6b6086df176 4 days ago 275MB 2025-04-05 12:53:16.571194 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 678b77b35148 4 days ago 651MB 2025-04-05 12:53:16.571209 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 2b6eabad6d7e 4 days ago 331MB 2025-04-05 12:53:16.571223 | orchestrator | registry.osism.tech/kolla/proxysql 2024.1 853780474ae4 4 days ago 375MB 2025-04-05 12:53:16.571237 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 29c4334a7039 4 days ago 282MB 2025-04-05 12:53:16.571251 | orchestrator | registry.osism.tech/kolla/keepalived 2024.1 0348fd01ace8 4 days ago 285MB 2025-04-05 12:53:16.571265 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 cf471b2ecfd3 4 days ago 460MB 2025-04-05 12:53:16.571279 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 5be0490a15f7 4 days ago 280MB 2025-04-05 12:53:16.571294 | orchestrator | registry.osism.tech/kolla/redis 2024.1 e5764ab8d387 4 days ago 280MB 2025-04-05 12:53:16.571308 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 22a4cf191073 4 days ago 1.08GB 2025-04-05 12:53:16.571322 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 efe4d8c5bf49 4 days ago 314MB 2025-04-05 12:53:16.571336 | orchestrator | 
registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 54a2c852ad42 4 days ago 310MB 2025-04-05 12:53:16.571350 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 472b60e2ea80 4 days ago 307MB 2025-04-05 12:53:16.571364 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 09793de22bcc 4 days ago 301MB 2025-04-05 12:53:16.571378 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 b565c6f7d7d6 4 days ago 366MB 2025-04-05 12:53:16.571392 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 1b19d9a09ef2 4 days ago 287MB 2025-04-05 12:53:16.571407 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 2ec3a13378aa 4 days ago 287MB 2025-04-05 12:53:16.571423 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.1 57d657aeff71 4 days ago 910MB 2025-04-05 12:53:16.571448 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.1 aef5ecc6bb0f 4 days ago 909MB 2025-04-05 12:53:16.571464 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 c3fdbd8cd48e 4 days ago 1.31GB 2025-04-05 12:53:16.571480 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 d186cc60d813 4 days ago 1.31GB 2025-04-05 12:53:16.571496 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.1 15372b022390 4 days ago 923MB 2025-04-05 12:53:16.571511 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 d2e18de4937c 4 days ago 923MB 2025-04-05 12:53:16.571527 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 c6936a571ba0 4 days ago 923MB 2025-04-05 12:53:16.571543 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.1 a69ae543e8df 4 days ago 954MB 2025-04-05 12:53:16.571559 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.1 9a8ee3a110b6 4 days ago 975MB 2025-04-05 12:53:16.571576 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.1 a0b5772a9584 4 days ago 954MB 2025-04-05 12:53:16.571591 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.1 abb28858c1eb 4 days ago 954MB 2025-04-05 12:53:16.571607 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.1 89ed7251aaf7 4 days ago 975MB 2025-04-05 12:53:16.571623 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 f22211939f7e 4 days ago 1.16GB 2025-04-05 12:53:16.571649 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 2fe19a1d51e7 4 days ago 1.05GB 2025-04-05 12:53:16.763010 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 3c28e8ae2695 4 days ago 909MB 2025-04-05 12:53:16.763051 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.1 1c4ae4476ada 4 days ago 990MB 2025-04-05 12:53:16.763066 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.1 30cc71115409 4 days ago 968MB 2025-04-05 12:53:16.763080 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 05c64c5a6a36 4 days ago 959MB 2025-04-05 12:53:16.763094 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 becb5db95fee 4 days ago 983MB 2025-04-05 12:53:16.763109 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 794484127164 4 days ago 962MB 2025-04-05 12:53:16.763123 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 af5b52a650f0 4 days ago 1.07GB 2025-04-05 12:53:16.763137 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 db4533ae1814 4 days ago 916MB 2025-04-05 12:53:16.763151 | orchestrator | 
registry.osism.tech/kolla/designate-backend-bind9 2024.1 f936f53e1769 4 days ago 921MB 2025-04-05 12:53:16.763165 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 6c92840dca9b 4 days ago 916MB 2025-04-05 12:53:16.763179 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 b0ab6e091aec 4 days ago 916MB 2025-04-05 12:53:16.763193 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 dfb3b4388493 4 days ago 916MB 2025-04-05 12:53:16.763207 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 1c27702bb752 4 days ago 921MB 2025-04-05 12:53:16.763221 | orchestrator | registry.osism.tech/kolla/glance-api 2024.1 27cac3e9857d 4 days ago 1.01GB 2025-04-05 12:53:16.763235 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.1 bec5ee7ed9b1 4 days ago 907MB 2025-04-05 12:53:16.763255 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.1 45c6414a3204 4 days ago 906MB 2025-04-05 12:53:16.763269 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.1 d2306eba4acf 4 days ago 907MB 2025-04-05 12:53:16.763283 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.1 4fa61f89c042 4 days ago 907MB 2025-04-05 12:53:16.763298 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 b9f68b41f909 4 days ago 1.13GB 2025-04-05 12:53:16.763312 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 efa2e6324cd7 4 days ago 1.23GB 2025-04-05 12:53:16.763326 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 36690f91fcb5 4 days ago 1.13GB 2025-04-05 12:53:16.763340 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 198eb90961cc 4 days ago 1.13GB 2025-04-05 12:53:16.763354 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 9126da12c18d 4 days ago 802MB 2025-04-05 12:53:16.763368 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 ffad2b29a5f4 4 days ago 802MB 2025-04-05 12:53:16.763382 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 510c49fda4c2 4 days ago 802MB 2025-04-05 12:53:16.763396 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 068028106f4d 4 days ago 802MB 2025-04-05 12:53:16.763419 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-04-05 12:53:16.763799 | orchestrator | ++ semver latest 5.0.0 2025-04-05 12:53:16.815622 | orchestrator | 2025-04-05 12:53:18.720422 | orchestrator | ## Containers @ testbed-node-1 2025-04-05 12:53:18.720552 | orchestrator | 2025-04-05 12:53:18.720571 | orchestrator | + [[ -1 -eq -1 ]] 2025-04-05 12:53:18.720586 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:53:18.720601 | orchestrator | + echo 2025-04-05 12:53:18.720615 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-04-05 12:53:18.720650 | orchestrator | + echo 2025-04-05 12:53:18.720665 | orchestrator | + osism container testbed-node-1 ps 2025-04-05 12:53:18.720699 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-04-05 12:53:18.720716 | orchestrator | 8f01e19aa51b registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-04-05 12:53:18.720744 | orchestrator | 5fd9f577b690 registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_conductor 2025-04-05 12:53:18.720759 | orchestrator | cd4cb0f9032c registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-04-05 
12:53:18.720774 | orchestrator | a2ecd0cb0b66 registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-04-05 12:53:18.720788 | orchestrator | 4a2a4d66486a registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-04-05 12:53:18.720802 | orchestrator | adb170e689a1 registry.osism.tech/kolla/glance-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-04-05 12:53:18.720816 | orchestrator | 2df135e83873 registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-04-05 12:53:18.720833 | orchestrator | 4372bfe3e2e5 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-04-05 12:53:18.720848 | orchestrator | 46c5e6ac2c36 registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api 2025-04-05 12:53:18.720917 | orchestrator | a1f4050e4785 registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-04-05 12:53:18.720933 | orchestrator | fecfc09e5c17 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-04-05 12:53:18.720947 | orchestrator | c6c56a0e9ddb registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter 2025-04-05 12:53:18.720962 | orchestrator | ebc029af6e87 registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2025-04-05 12:53:18.720976 | orchestrator | 4ad3f0242242 registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-04-05 12:53:18.720990 | orchestrator | 21b68d56a0b4 registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-04-05 12:53:18.721007 | orchestrator | 67e3df156088 registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-04-05 12:53:18.721023 | orchestrator | 622b4ac82214 registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-04-05 12:53:18.721040 | orchestrator | 122d0b038f34 registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-04-05 12:53:18.721065 | orchestrator | 1c846b2d18b3 registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-04-05 12:53:18.721082 | orchestrator | 2c30345eb7f2 registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-04-05 12:53:18.721098 | orchestrator | 1ce381447043 registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-04-05 12:53:18.721127 | orchestrator | 4e3359f62f8b registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-04-05 12:53:18.721145 | orchestrator | 165b73d412ad 
registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-04-05 12:53:18.721161 | orchestrator | 0a7cb67adf86 registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-04-05 12:53:18.721179 | orchestrator | 5b3f1cf62509 registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-04-05 12:53:18.721195 | orchestrator | 9a27457ee541 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-04-05 12:53:18.721214 | orchestrator | fe047e11a167 registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-04-05 12:53:18.721230 | orchestrator | 82e59c4406ae registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-04-05 12:53:18.721246 | orchestrator | 54a1a152f7e9 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-04-05 12:53:18.721262 | orchestrator | 3a5f9b3e5e06 registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-04-05 12:53:18.721278 | orchestrator | 8c80b2b5eb95 registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-04-05 12:53:18.721293 | orchestrator | fd74cdc707bf registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-04-05 12:53:18.721309 | orchestrator | 383ab29557d5 registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-04-05 12:53:18.721325 | orchestrator | 93c69a9e5f51 registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-04-05 12:53:18.721340 | orchestrator | 995fed3084a7 registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1 2025-04-05 12:53:18.721355 | orchestrator | fedf8e0799ca registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-04-05 12:53:18.721369 | orchestrator | a3721d5ad683 registry.osism.tech/kolla/proxysql:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-04-05 12:53:18.721383 | orchestrator | b8fb6b0a9d73 registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-04-05 12:53:18.721404 | orchestrator | 558d90d945b3 registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-04-05 12:53:18.721419 | orchestrator | 1667b60150a1 registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-04-05 12:53:18.721433 | orchestrator | b74c745b6067 registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-04-05 12:53:18.721447 | orchestrator | 96f6380782e2 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1 2025-04-05 12:53:18.721461 | orchestrator | 87f8b3ba7d7f 
registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-04-05 12:53:18.721475 | orchestrator | 611246e79bc8 registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-04-05 12:53:18.721495 | orchestrator | be0906023c88 registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-04-05 12:53:18.932832 | orchestrator | d2e0fec622af registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-04-05 12:53:18.932978 | orchestrator | 23152887126a registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-04-05 12:53:18.932998 | orchestrator | 173e614bb8df registry.osism.tech/kolla/redis:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-04-05 12:53:18.933027 | orchestrator | 4cee0b2ea7dd registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-04-05 12:53:18.933042 | orchestrator | 3146a58672ce registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-04-05 12:53:18.933056 | orchestrator | 268d453fa971 registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-04-05 12:53:18.933070 | orchestrator | 97f984cd893b registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-04-05 12:53:18.933100 | orchestrator | 2025-04-05 12:53:20.821335 | orchestrator | ## Images @ testbed-node-1 2025-04-05 12:53:20.821432 | orchestrator | 2025-04-05 12:53:20.821449 | orchestrator | + echo 2025-04-05 12:53:20.821463 | orchestrator | + echo '## Images @ testbed-node-1' 2025-04-05 12:53:20.821480 | orchestrator | + echo 2025-04-05 12:53:20.821494 | orchestrator | + osism container testbed-node-1 images 2025-04-05 12:53:20.821525 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-04-05 12:53:20.821541 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy f9bc1ac57693 10 hours ago 1.38GB 2025-04-05 12:53:20.821556 | orchestrator | registry.osism.tech/kolla/grafana 2024.1 5f628eb6465a 4 days ago 946MB 2025-04-05 12:53:20.821570 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 35ab75b661ca 4 days ago 1.55GB 2025-04-05 12:53:20.821584 | orchestrator | registry.osism.tech/kolla/cron 2024.1 d5b9b18eb6ca 4 days ago 274MB 2025-04-05 12:53:20.821601 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 5ef50f941bee 4 days ago 1.49GB 2025-04-05 12:53:20.821636 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 a9d8718230d6 4 days ago 545MB 2025-04-05 12:53:20.821651 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 e6b6086df176 4 days ago 275MB 2025-04-05 12:53:20.821665 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 678b77b35148 4 days ago 651MB 2025-04-05 12:53:20.821679 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 2b6eabad6d7e 4 days ago 331MB 2025-04-05 12:53:20.821693 | orchestrator | registry.osism.tech/kolla/proxysql 2024.1 853780474ae4 4 days ago 375MB 2025-04-05 12:53:20.821707 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 29c4334a7039 4 days ago 282MB 2025-04-05 12:53:20.821721 | orchestrator | registry.osism.tech/kolla/keepalived 
2024.1 0348fd01ace8 4 days ago 285MB 2025-04-05 12:53:20.821734 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 cf471b2ecfd3 4 days ago 460MB 2025-04-05 12:53:20.821748 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 5be0490a15f7 4 days ago 280MB 2025-04-05 12:53:20.821762 | orchestrator | registry.osism.tech/kolla/redis 2024.1 e5764ab8d387 4 days ago 280MB 2025-04-05 12:53:20.821776 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 22a4cf191073 4 days ago 1.08GB 2025-04-05 12:53:20.821800 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 efe4d8c5bf49 4 days ago 314MB 2025-04-05 12:53:20.821815 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 54a2c852ad42 4 days ago 310MB 2025-04-05 12:53:20.821829 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 472b60e2ea80 4 days ago 307MB 2025-04-05 12:53:20.821843 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 09793de22bcc 4 days ago 301MB 2025-04-05 12:53:20.821904 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 b565c6f7d7d6 4 days ago 366MB 2025-04-05 12:53:20.821921 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 1b19d9a09ef2 4 days ago 287MB 2025-04-05 12:53:20.821937 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 2ec3a13378aa 4 days ago 287MB 2025-04-05 12:53:20.821952 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 c3fdbd8cd48e 4 days ago 1.31GB 2025-04-05 12:53:20.821968 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 d186cc60d813 4 days ago 1.31GB 2025-04-05 12:53:20.821983 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.1 15372b022390 4 days ago 923MB 2025-04-05 12:53:20.821999 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 d2e18de4937c 4 days ago 923MB 2025-04-05 12:53:20.822060 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 c6936a571ba0 4 days ago 923MB 2025-04-05 12:53:20.822080 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 f22211939f7e 4 days ago 1.16GB 2025-04-05 12:53:20.822095 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 2fe19a1d51e7 4 days ago 1.05GB 2025-04-05 12:53:20.822110 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 3c28e8ae2695 4 days ago 909MB 2025-04-05 12:53:20.822123 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 05c64c5a6a36 4 days ago 959MB 2025-04-05 12:53:20.822137 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 becb5db95fee 4 days ago 983MB 2025-04-05 12:53:20.822151 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 794484127164 4 days ago 962MB 2025-04-05 12:53:20.822165 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 af5b52a650f0 4 days ago 1.07GB 2025-04-05 12:53:20.822188 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 db4533ae1814 4 days ago 916MB 2025-04-05 12:53:20.822214 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.1 f936f53e1769 4 days ago 921MB 2025-04-05 12:53:21.030183 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 6c92840dca9b 4 days ago 916MB 2025-04-05 12:53:21.030277 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 b0ab6e091aec 4 days ago 916MB 2025-04-05 12:53:21.030294 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 dfb3b4388493 4 days ago 916MB 2025-04-05 
12:53:21.030309 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 1c27702bb752 4 days ago 921MB 2025-04-05 12:53:21.030323 | orchestrator | registry.osism.tech/kolla/glance-api 2024.1 27cac3e9857d 4 days ago 1.01GB 2025-04-05 12:53:21.030338 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 b9f68b41f909 4 days ago 1.13GB 2025-04-05 12:53:21.030353 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 efa2e6324cd7 4 days ago 1.23GB 2025-04-05 12:53:21.030367 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 36690f91fcb5 4 days ago 1.13GB 2025-04-05 12:53:21.030448 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 198eb90961cc 4 days ago 1.13GB 2025-04-05 12:53:21.030466 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 9126da12c18d 4 days ago 802MB 2025-04-05 12:53:21.030482 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 ffad2b29a5f4 4 days ago 802MB 2025-04-05 12:53:21.030499 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 510c49fda4c2 4 days ago 802MB 2025-04-05 12:53:21.030514 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 068028106f4d 4 days ago 802MB 2025-04-05 12:53:21.030544 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-04-05 12:53:21.030740 | orchestrator | ++ semver latest 5.0.0 2025-04-05 12:53:21.074779 | orchestrator | 2025-04-05 12:53:22.984043 | orchestrator | ## Containers @ testbed-node-2 2025-04-05 12:53:22.984156 | orchestrator | 2025-04-05 12:53:22.984174 | orchestrator | + [[ -1 -eq -1 ]] 2025-04-05 12:53:22.984189 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:53:22.984203 | orchestrator | + echo 2025-04-05 12:53:22.984217 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-04-05 12:53:22.984233 | orchestrator | + echo 2025-04-05 12:53:22.984247 | orchestrator | + osism container testbed-node-2 ps 2025-04-05 12:53:22.984280 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-04-05 12:53:22.984297 | orchestrator | 6dde2edc0e1a registry.osism.tech/kolla/nova-novncproxy:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-04-05 12:53:22.984313 | orchestrator | 5e222ce6b460 registry.osism.tech/kolla/nova-conductor:2024.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_conductor 2025-04-05 12:53:22.984327 | orchestrator | 5f434b09fbf6 registry.osism.tech/kolla/nova-api:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-04-05 12:53:22.984341 | orchestrator | 740f4d1e4cfd registry.osism.tech/kolla/nova-scheduler:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-04-05 12:53:22.984355 | orchestrator | e9d7edd7c822 registry.osism.tech/kolla/grafana:2024.1 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-04-05 12:53:22.984370 | orchestrator | feb0a3de621c registry.osism.tech/kolla/glance-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-04-05 12:53:22.984405 | orchestrator | 6278040565da registry.osism.tech/kolla/cinder-scheduler:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-04-05 12:53:22.984420 | orchestrator | fd8a1e2222fe registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2025-04-05 12:53:22.984434 | orchestrator | 8e6ae19fff91 
registry.osism.tech/kolla/cinder-api:2024.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api 2025-04-05 12:53:22.984449 | orchestrator | 105c19b92bbd registry.osism.tech/kolla/prometheus-cadvisor:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-04-05 12:53:22.984463 | orchestrator | a38139ed5e20 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_memcached_exporter 2025-04-05 12:53:22.984477 | orchestrator | 4975518f8627 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_mysqld_exporter 2025-04-05 12:53:22.984491 | orchestrator | 35f8f691d64d registry.osism.tech/kolla/prometheus-node-exporter:2024.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2025-04-05 12:53:22.984513 | orchestrator | af939fca7ced registry.osism.tech/kolla/magnum-conductor:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-04-05 12:53:22.984528 | orchestrator | c920249f45d1 registry.osism.tech/kolla/magnum-api:2024.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-04-05 12:53:22.984542 | orchestrator | 75002b01dd99 registry.osism.tech/kolla/neutron-server:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-04-05 12:53:22.984556 | orchestrator | 464b014a48bc registry.osism.tech/kolla/placement-api:2024.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-04-05 12:53:22.984572 | orchestrator | 3694bcdee764 registry.osism.tech/kolla/designate-worker:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-04-05 12:53:22.984588 | orchestrator | e48a2d308058 registry.osism.tech/kolla/designate-mdns:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-04-05 12:53:22.984603 | orchestrator | 9d7fdfa52e1c registry.osism.tech/kolla/designate-producer:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-04-05 12:53:22.984618 | orchestrator | e79699ed8a2b registry.osism.tech/kolla/designate-central:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-04-05 12:53:22.984646 | orchestrator | 9919a2baca23 registry.osism.tech/kolla/designate-api:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-04-05 12:53:22.984663 | orchestrator | e1005791780c registry.osism.tech/kolla/designate-backend-bind9:2024.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-04-05 12:53:22.984679 | orchestrator | 8eebe6b71b1c registry.osism.tech/kolla/barbican-worker:2024.1 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) barbican_worker 2025-04-05 12:53:22.984694 | orchestrator | ac2b5345630c registry.osism.tech/kolla/barbican-keystone-listener:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-04-05 12:53:22.984710 | orchestrator | 3445c689584e registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-04-05 12:53:22.984732 | orchestrator | 758ffd655297 registry.osism.tech/kolla/barbican-api:2024.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-04-05 12:53:22.984748 | orchestrator | b386767ed534 
registry.osism.tech/kolla/keystone:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-04-05 12:53:22.984764 | orchestrator | c20b13d4ed65 registry.osism.tech/kolla/keystone-fernet:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-04-05 12:53:22.984780 | orchestrator | 43fa239d8368 registry.osism.tech/kolla/horizon:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-04-05 12:53:22.984795 | orchestrator | a6ed64016c33 registry.osism.tech/kolla/keystone-ssh:2024.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-04-05 12:53:22.984811 | orchestrator | cd8e5d5d0ef8 registry.osism.tech/kolla/mariadb-server:2024.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-04-05 12:53:22.984826 | orchestrator | 99575d0a7612 registry.osism.tech/kolla/opensearch-dashboards:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-04-05 12:53:22.984842 | orchestrator | a36153823860 registry.osism.tech/kolla/opensearch:2024.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-04-05 12:53:22.984885 | orchestrator | 8d6255fa0866 registry.osism.tech/osism/ceph-daemon:quincy "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-04-05 12:53:22.984902 | orchestrator | 99aab2037579 registry.osism.tech/kolla/keepalived:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-04-05 12:53:22.984917 | orchestrator | fb13d58a8c2a registry.osism.tech/kolla/proxysql:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-04-05 12:53:22.984932 | orchestrator | 64dc68703e8f registry.osism.tech/kolla/haproxy:2024.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-04-05 12:53:22.984946 | orchestrator | 0f45e3a12268 registry.osism.tech/osism/ceph-daemon:quincy "/opt/ceph-container…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2025-04-05 12:53:22.984961 | orchestrator | 37d24c28fc5c registry.osism.tech/kolla/ovn-northd:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-04-05 12:53:22.984975 | orchestrator | 5218b574628f registry.osism.tech/kolla/ovn-sb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-04-05 12:53:22.984989 | orchestrator | 4ac96f326498 registry.osism.tech/kolla/ovn-nb-db-server:2024.1 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-04-05 12:53:22.985007 | orchestrator | c33bc1ded6de registry.osism.tech/kolla/rabbitmq:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-04-05 12:53:22.985025 | orchestrator | 485fe77fda94 registry.osism.tech/kolla/ovn-controller:2024.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-04-05 12:53:22.985046 | orchestrator | c77f170812c0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-04-05 12:53:23.184988 | orchestrator | a7cdee4be6a0 registry.osism.tech/kolla/openvswitch-db-server:2024.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-04-05 12:53:23.185092 | orchestrator | f47cb8d1da20 registry.osism.tech/kolla/redis-sentinel:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-04-05 12:53:23.185113 | orchestrator | 9013c90434bd registry.osism.tech/kolla/redis:2024.1 
"dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-04-05 12:53:23.185129 | orchestrator | b862e9909bbb registry.osism.tech/kolla/memcached:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-04-05 12:53:23.185144 | orchestrator | d6063bed69cd registry.osism.tech/kolla/cron:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-04-05 12:53:23.185159 | orchestrator | 6c6e6271406e registry.osism.tech/kolla/kolla-toolbox:2024.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-04-05 12:53:23.185173 | orchestrator | 47bc27325bea registry.osism.tech/kolla/fluentd:2024.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-04-05 12:53:23.185202 | orchestrator | 2025-04-05 12:53:25.004044 | orchestrator | ## Images @ testbed-node-2 2025-04-05 12:53:25.004155 | orchestrator | 2025-04-05 12:53:25.004174 | orchestrator | + echo 2025-04-05 12:53:25.004189 | orchestrator | + echo '## Images @ testbed-node-2' 2025-04-05 12:53:25.004205 | orchestrator | + echo 2025-04-05 12:53:25.004220 | orchestrator | + osism container testbed-node-2 images 2025-04-05 12:53:25.004251 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-04-05 12:53:25.004269 | orchestrator | registry.osism.tech/osism/ceph-daemon quincy f9bc1ac57693 10 hours ago 1.38GB 2025-04-05 12:53:25.004284 | orchestrator | registry.osism.tech/kolla/grafana 2024.1 5f628eb6465a 4 days ago 946MB 2025-04-05 12:53:25.004299 | orchestrator | registry.osism.tech/kolla/opensearch 2024.1 35ab75b661ca 4 days ago 1.55GB 2025-04-05 12:53:25.004313 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.1 5ef50f941bee 4 days ago 1.49GB 2025-04-05 12:53:25.004327 | orchestrator | registry.osism.tech/kolla/cron 2024.1 d5b9b18eb6ca 4 days ago 274MB 2025-04-05 12:53:25.004341 | orchestrator | registry.osism.tech/kolla/fluentd 2024.1 a9d8718230d6 4 days ago 545MB 2025-04-05 12:53:25.004355 | orchestrator | registry.osism.tech/kolla/memcached 2024.1 e6b6086df176 4 days ago 275MB 2025-04-05 12:53:25.004369 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.1 678b77b35148 4 days ago 651MB 2025-04-05 12:53:25.004383 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.1 2b6eabad6d7e 4 days ago 331MB 2025-04-05 12:53:25.004397 | orchestrator | registry.osism.tech/kolla/proxysql 2024.1 853780474ae4 4 days ago 375MB 2025-04-05 12:53:25.004411 | orchestrator | registry.osism.tech/kolla/haproxy 2024.1 29c4334a7039 4 days ago 282MB 2025-04-05 12:53:25.004425 | orchestrator | registry.osism.tech/kolla/keepalived 2024.1 0348fd01ace8 4 days ago 285MB 2025-04-05 12:53:25.004439 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.1 cf471b2ecfd3 4 days ago 460MB 2025-04-05 12:53:25.004453 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.1 5be0490a15f7 4 days ago 280MB 2025-04-05 12:53:25.004467 | orchestrator | registry.osism.tech/kolla/redis 2024.1 e5764ab8d387 4 days ago 280MB 2025-04-05 12:53:25.004481 | orchestrator | registry.osism.tech/kolla/horizon 2024.1 22a4cf191073 4 days ago 1.08GB 2025-04-05 12:53:25.004495 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.1 efe4d8c5bf49 4 days ago 314MB 2025-04-05 12:53:25.004533 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.1 54a2c852ad42 4 days ago 310MB 2025-04-05 12:53:25.004548 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.1 472b60e2ea80 4 days ago 307MB 2025-04-05 12:53:25.004562 | 
orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.1 09793de22bcc 4 days ago 301MB 2025-04-05 12:53:25.004576 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.1 b565c6f7d7d6 4 days ago 366MB 2025-04-05 12:53:25.004591 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.1 1b19d9a09ef2 4 days ago 287MB 2025-04-05 12:53:25.004619 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.1 2ec3a13378aa 4 days ago 287MB 2025-04-05 12:53:25.004636 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.1 c3fdbd8cd48e 4 days ago 1.31GB 2025-04-05 12:53:25.004654 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.1 d186cc60d813 4 days ago 1.31GB 2025-04-05 12:53:25.004669 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.1 15372b022390 4 days ago 923MB 2025-04-05 12:53:25.004685 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.1 d2e18de4937c 4 days ago 923MB 2025-04-05 12:53:25.004702 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.1 c6936a571ba0 4 days ago 923MB 2025-04-05 12:53:25.004718 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.1 f22211939f7e 4 days ago 1.16GB 2025-04-05 12:53:25.004735 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.1 2fe19a1d51e7 4 days ago 1.05GB 2025-04-05 12:53:25.004753 | orchestrator | registry.osism.tech/kolla/placement-api 2024.1 3c28e8ae2695 4 days ago 909MB 2025-04-05 12:53:25.004769 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.1 05c64c5a6a36 4 days ago 959MB 2025-04-05 12:53:25.004785 | orchestrator | registry.osism.tech/kolla/keystone 2024.1 becb5db95fee 4 days ago 983MB 2025-04-05 12:53:25.004801 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.1 794484127164 4 days ago 962MB 2025-04-05 12:53:25.004816 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.1 af5b52a650f0 4 days ago 1.07GB 2025-04-05 12:53:25.004832 | orchestrator | registry.osism.tech/kolla/designate-central 2024.1 db4533ae1814 4 days ago 916MB 2025-04-05 12:53:25.004881 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.1 f936f53e1769 4 days ago 921MB 2025-04-05 12:53:25.210959 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.1 6c92840dca9b 4 days ago 916MB 2025-04-05 12:53:25.211032 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.1 b0ab6e091aec 4 days ago 916MB 2025-04-05 12:53:25.211047 | orchestrator | registry.osism.tech/kolla/designate-api 2024.1 dfb3b4388493 4 days ago 916MB 2025-04-05 12:53:25.211062 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.1 1c27702bb752 4 days ago 921MB 2025-04-05 12:53:25.211076 | orchestrator | registry.osism.tech/kolla/glance-api 2024.1 27cac3e9857d 4 days ago 1.01GB 2025-04-05 12:53:25.211091 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.1 b9f68b41f909 4 days ago 1.13GB 2025-04-05 12:53:25.211105 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.1 efa2e6324cd7 4 days ago 1.23GB 2025-04-05 12:53:25.211119 | orchestrator | registry.osism.tech/kolla/nova-api 2024.1 36690f91fcb5 4 days ago 1.13GB 2025-04-05 12:53:25.211134 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.1 198eb90961cc 4 days ago 1.13GB 2025-04-05 12:53:25.211168 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.1 9126da12c18d 4 days ago 802MB 2025-04-05 12:53:25.211183 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.1 ffad2b29a5f4 4 days ago 
802MB 2025-04-05 12:53:25.211197 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.1 510c49fda4c2 4 days ago 802MB 2025-04-05 12:53:25.211211 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.1 068028106f4d 4 days ago 802MB 2025-04-05 12:53:25.211239 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-04-05 12:53:25.217712 | orchestrator | + set -e 2025-04-05 12:53:25.218742 | orchestrator | + source /opt/manager-vars.sh 2025-04-05 12:53:25.218848 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-05 12:53:25.229139 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-05 12:53:25.229166 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-05 12:53:25.229181 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-05 12:53:25.229195 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-05 12:53:25.229211 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-05 12:53:25.229225 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:53:25.229317 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:53:25.229333 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-05 12:53:25.229347 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-05 12:53:25.229362 | orchestrator | ++ export ARA=false 2025-04-05 12:53:25.229376 | orchestrator | ++ ARA=false 2025-04-05 12:53:25.229390 | orchestrator | ++ export TEMPEST=false 2025-04-05 12:53:25.229404 | orchestrator | ++ TEMPEST=false 2025-04-05 12:53:25.229418 | orchestrator | ++ export IS_ZUUL=true 2025-04-05 12:53:25.229432 | orchestrator | ++ IS_ZUUL=true 2025-04-05 12:53:25.229447 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 12:53:25.229461 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 12:53:25.229475 | orchestrator | ++ export EXTERNAL_API=false 2025-04-05 12:53:25.229489 | orchestrator | ++ EXTERNAL_API=false 2025-04-05 12:53:25.229503 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-05 12:53:25.229517 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-05 12:53:25.229531 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-05 12:53:25.229546 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-05 12:53:25.229560 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-05 12:53:25.229589 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-05 12:53:25.229604 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-05 12:53:25.229623 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-04-05 12:53:25.229644 | orchestrator | + set -e 2025-04-05 12:53:25.230437 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 12:53:25.230464 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 12:53:25.230479 | orchestrator | ++ INTERACTIVE=false 2025-04-05 12:53:25.230496 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 12:53:25.230511 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 12:53:25.230526 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-04-05 12:53:25.230553 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-04-05 12:53:25.261959 | orchestrator | 2025-04-05 12:53:25.757293 | orchestrator | # Ceph status 2025-04-05 12:53:25.757366 | orchestrator | 2025-04-05 12:53:25.757381 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:53:25.757412 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:53:25.757430 | orchestrator | + 
echo 2025-04-05 12:53:25.757446 | orchestrator | + echo '# Ceph status' 2025-04-05 12:53:25.757461 | orchestrator | + echo 2025-04-05 12:53:25.757476 | orchestrator | + ceph -s 2025-04-05 12:53:25.757505 | orchestrator | cluster: 2025-04-05 12:53:25.786216 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-04-05 12:53:25.786246 | orchestrator | health: HEALTH_OK 2025-04-05 12:53:25.786262 | orchestrator | 2025-04-05 12:53:25.786278 | orchestrator | services: 2025-04-05 12:53:25.786293 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-04-05 12:53:25.786310 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-04-05 12:53:25.786326 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-04-05 12:53:25.786341 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-04-05 12:53:25.786357 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-04-05 12:53:25.786372 | orchestrator | 2025-04-05 12:53:25.786388 | orchestrator | data: 2025-04-05 12:53:25.786403 | orchestrator | volumes: 1/1 healthy 2025-04-05 12:53:25.786438 | orchestrator | pools: 14 pools, 401 pgs 2025-04-05 12:53:25.786454 | orchestrator | objects: 519 objects, 2.2 GiB 2025-04-05 12:53:25.786469 | orchestrator | usage: 8.4 GiB used, 111 GiB / 120 GiB avail 2025-04-05 12:53:25.786484 | orchestrator | pgs: 401 active+clean 2025-04-05 12:53:25.786499 | orchestrator | 2025-04-05 12:53:25.786515 | orchestrator | io: 2025-04-05 12:53:25.786530 | orchestrator | client: 8.7 KiB/s rd, 0 B/s wr, 8 op/s rd, 5 op/s wr 2025-04-05 12:53:25.786546 | orchestrator | 2025-04-05 12:53:25.786567 | orchestrator | 2025-04-05 12:53:26.322616 | orchestrator | # Ceph versions 2025-04-05 12:53:26.322688 | orchestrator | 2025-04-05 12:53:26.322704 | orchestrator | + echo 2025-04-05 12:53:26.322718 | orchestrator | + echo '# Ceph versions' 2025-04-05 12:53:26.322734 | orchestrator | + echo 2025-04-05 12:53:26.322748 | orchestrator | + ceph versions 2025-04-05 12:53:26.322775 | orchestrator | { 2025-04-05 12:53:26.349405 | orchestrator | "mon": { 2025-04-05 12:53:26.349435 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-04-05 12:53:26.349451 | orchestrator | }, 2025-04-05 12:53:26.349466 | orchestrator | "mgr": { 2025-04-05 12:53:26.349480 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-04-05 12:53:26.349495 | orchestrator | }, 2025-04-05 12:53:26.349509 | orchestrator | "osd": { 2025-04-05 12:53:26.349523 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 6 2025-04-05 12:53:26.349537 | orchestrator | }, 2025-04-05 12:53:26.349551 | orchestrator | "mds": { 2025-04-05 12:53:26.349565 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-04-05 12:53:26.349579 | orchestrator | }, 2025-04-05 12:53:26.349593 | orchestrator | "rgw": { 2025-04-05 12:53:26.349607 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3 2025-04-05 12:53:26.349622 | orchestrator | }, 2025-04-05 12:53:26.349635 | orchestrator | "overall": { 2025-04-05 12:53:26.349650 | orchestrator | "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 18 2025-04-05 12:53:26.349664 | orchestrator | } 2025-04-05 12:53:26.349678 | orchestrator | } 2025-04-05 12:53:26.349698 | orchestrator | 
2025-04-05 12:53:26.803088 | orchestrator | # Ceph OSD tree 2025-04-05 12:53:26.803173 | orchestrator | 2025-04-05 12:53:26.803189 | orchestrator | + echo 2025-04-05 12:53:26.803203 | orchestrator | + echo '# Ceph OSD tree' 2025-04-05 12:53:26.803218 | orchestrator | + echo 2025-04-05 12:53:26.803232 | orchestrator | + ceph osd df tree 2025-04-05 12:53:26.803261 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-04-05 12:53:26.829933 | orchestrator | -1 0.11691 - 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.02 1.00 - root default 2025-04-05 12:53:26.829963 | orchestrator | -5 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-3 2025-04-05 12:53:26.829977 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 963 MiB 0 B 298 MiB 19 GiB 6.16 0.88 174 up osd.0 2025-04-05 12:53:26.829993 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.3 GiB 0 B 298 MiB 18 GiB 7.89 1.12 218 up osd.3 2025-04-05 12:53:26.830007 | orchestrator | -3 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-4 2025-04-05 12:53:26.830065 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.2 GiB 0 B 294 MiB 18 GiB 7.59 1.08 209 up osd.1 2025-04-05 12:53:26.830081 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1019 MiB 0 B 302 MiB 19 GiB 6.46 0.92 181 up osd.5 2025-04-05 12:53:26.830095 | orchestrator | -7 0.03897 - 40 GiB 2.8 GiB 2.2 GiB 0 B 596 MiB 37 GiB 7.02 1.00 - host testbed-node-5 2025-04-05 12:53:26.830109 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.7 GiB 1.4 GiB 0 B 298 MiB 18 GiB 8.28 1.18 198 up osd.2 2025-04-05 12:53:26.830123 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 883 MiB 0 B 298 MiB 19 GiB 5.77 0.82 190 up osd.4 2025-04-05 12:53:26.830137 | orchestrator | TOTAL 120 GiB 8.4 GiB 6.7 GiB 0 B 1.7 GiB 111 GiB 7.02 2025-04-05 12:53:26.830173 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.94 2025-04-05 12:53:26.830213 | orchestrator | 2025-04-05 12:53:27.340157 | orchestrator | # Ceph monitor status 2025-04-05 12:53:27.340244 | orchestrator | 2025-04-05 12:53:27.340261 | orchestrator | + echo 2025-04-05 12:53:27.340277 | orchestrator | + echo '# Ceph monitor status' 2025-04-05 12:53:27.340291 | orchestrator | + echo 2025-04-05 12:53:27.340305 | orchestrator | + ceph mon stat 2025-04-05 12:53:27.340346 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {1}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-04-05 12:53:27.365198 | orchestrator | 2025-04-05 12:53:27.365528 | orchestrator | # Ceph quorum status 2025-04-05 12:53:27.365555 | orchestrator | 2025-04-05 12:53:27.365569 | orchestrator | + echo 2025-04-05 12:53:27.365584 | orchestrator | + echo '# Ceph quorum status' 2025-04-05 12:53:27.365598 | orchestrator | + echo 2025-04-05 12:53:27.365618 | orchestrator | + ceph quorum_status 2025-04-05 12:53:27.365750 | orchestrator | + jq 2025-04-05 12:53:27.909918 | orchestrator | { 2025-04-05 12:53:27.910195 | orchestrator | "election_epoch": 6, 2025-04-05 12:53:27.910217 | orchestrator | "quorum": [ 2025-04-05 12:53:27.910233 | orchestrator | 0, 2025-04-05 12:53:27.910247 | orchestrator | 1, 2025-04-05 12:53:27.910261 | orchestrator | 2 2025-04-05 12:53:27.910274 | orchestrator | ], 2025-04-05 12:53:27.910288 | 
orchestrator | "quorum_names": [ 2025-04-05 12:53:27.910314 | orchestrator | "testbed-node-0", 2025-04-05 12:53:27.910328 | orchestrator | "testbed-node-1", 2025-04-05 12:53:27.910342 | orchestrator | "testbed-node-2" 2025-04-05 12:53:27.910360 | orchestrator | ], 2025-04-05 12:53:27.910374 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-04-05 12:53:27.910390 | orchestrator | "quorum_age": 1590, 2025-04-05 12:53:27.910404 | orchestrator | "features": { 2025-04-05 12:53:27.910418 | orchestrator | "quorum_con": "4540138320759226367", 2025-04-05 12:53:27.910432 | orchestrator | "quorum_mon": [ 2025-04-05 12:53:27.910446 | orchestrator | "kraken", 2025-04-05 12:53:27.910459 | orchestrator | "luminous", 2025-04-05 12:53:27.910473 | orchestrator | "mimic", 2025-04-05 12:53:27.910487 | orchestrator | "osdmap-prune", 2025-04-05 12:53:27.910501 | orchestrator | "nautilus", 2025-04-05 12:53:27.910515 | orchestrator | "octopus", 2025-04-05 12:53:27.910529 | orchestrator | "pacific", 2025-04-05 12:53:27.910542 | orchestrator | "elector-pinging", 2025-04-05 12:53:27.910556 | orchestrator | "quincy" 2025-04-05 12:53:27.910570 | orchestrator | ] 2025-04-05 12:53:27.910584 | orchestrator | }, 2025-04-05 12:53:27.910598 | orchestrator | "monmap": { 2025-04-05 12:53:27.910612 | orchestrator | "epoch": 1, 2025-04-05 12:53:27.910625 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-04-05 12:53:27.910639 | orchestrator | "modified": "2025-04-05T12:26:17.641430Z", 2025-04-05 12:53:27.910654 | orchestrator | "created": "2025-04-05T12:26:17.641430Z", 2025-04-05 12:53:27.910668 | orchestrator | "min_mon_release": 17, 2025-04-05 12:53:27.910684 | orchestrator | "min_mon_release_name": "quincy", 2025-04-05 12:53:27.910700 | orchestrator | "election_strategy": 1, 2025-04-05 12:53:27.910715 | orchestrator | "disallowed_leaders: ": "", 2025-04-05 12:53:27.910731 | orchestrator | "stretch_mode": false, 2025-04-05 12:53:27.910747 | orchestrator | "tiebreaker_mon": "", 2025-04-05 12:53:27.910762 | orchestrator | "removed_ranks: ": "1", 2025-04-05 12:53:27.910778 | orchestrator | "features": { 2025-04-05 12:53:27.910793 | orchestrator | "persistent": [ 2025-04-05 12:53:27.910808 | orchestrator | "kraken", 2025-04-05 12:53:27.910823 | orchestrator | "luminous", 2025-04-05 12:53:27.910839 | orchestrator | "mimic", 2025-04-05 12:53:27.910876 | orchestrator | "osdmap-prune", 2025-04-05 12:53:27.910893 | orchestrator | "nautilus", 2025-04-05 12:53:27.910908 | orchestrator | "octopus", 2025-04-05 12:53:27.910923 | orchestrator | "pacific", 2025-04-05 12:53:27.910938 | orchestrator | "elector-pinging", 2025-04-05 12:53:27.910953 | orchestrator | "quincy" 2025-04-05 12:53:27.910969 | orchestrator | ], 2025-04-05 12:53:27.910984 | orchestrator | "optional": [] 2025-04-05 12:53:27.910999 | orchestrator | }, 2025-04-05 12:53:27.911015 | orchestrator | "mons": [ 2025-04-05 12:53:27.911032 | orchestrator | { 2025-04-05 12:53:27.911046 | orchestrator | "rank": 0, 2025-04-05 12:53:27.911060 | orchestrator | "name": "testbed-node-0", 2025-04-05 12:53:27.911073 | orchestrator | "public_addrs": { 2025-04-05 12:53:27.911087 | orchestrator | "addrvec": [ 2025-04-05 12:53:27.911137 | orchestrator | { 2025-04-05 12:53:27.911151 | orchestrator | "type": "v2", 2025-04-05 12:53:27.911165 | orchestrator | "addr": "192.168.16.10:3300", 2025-04-05 12:53:27.911180 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911197 | orchestrator | }, 2025-04-05 12:53:27.911211 | orchestrator | { 2025-04-05 12:53:27.911225 | 
orchestrator | "type": "v1", 2025-04-05 12:53:27.911239 | orchestrator | "addr": "192.168.16.10:6789", 2025-04-05 12:53:27.911253 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911267 | orchestrator | } 2025-04-05 12:53:27.911280 | orchestrator | ] 2025-04-05 12:53:27.911295 | orchestrator | }, 2025-04-05 12:53:27.911309 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-04-05 12:53:27.911322 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-04-05 12:53:27.911337 | orchestrator | "priority": 0, 2025-04-05 12:53:27.911350 | orchestrator | "weight": 0, 2025-04-05 12:53:27.911368 | orchestrator | "crush_location": "{}" 2025-04-05 12:53:27.911382 | orchestrator | }, 2025-04-05 12:53:27.911396 | orchestrator | { 2025-04-05 12:53:27.911410 | orchestrator | "rank": 1, 2025-04-05 12:53:27.911424 | orchestrator | "name": "testbed-node-1", 2025-04-05 12:53:27.911438 | orchestrator | "public_addrs": { 2025-04-05 12:53:27.911451 | orchestrator | "addrvec": [ 2025-04-05 12:53:27.911465 | orchestrator | { 2025-04-05 12:53:27.911479 | orchestrator | "type": "v2", 2025-04-05 12:53:27.911492 | orchestrator | "addr": "192.168.16.11:3300", 2025-04-05 12:53:27.911507 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911520 | orchestrator | }, 2025-04-05 12:53:27.911534 | orchestrator | { 2025-04-05 12:53:27.911548 | orchestrator | "type": "v1", 2025-04-05 12:53:27.911561 | orchestrator | "addr": "192.168.16.11:6789", 2025-04-05 12:53:27.911576 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911589 | orchestrator | } 2025-04-05 12:53:27.911603 | orchestrator | ] 2025-04-05 12:53:27.911617 | orchestrator | }, 2025-04-05 12:53:27.911631 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-04-05 12:53:27.911644 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-04-05 12:53:27.911658 | orchestrator | "priority": 0, 2025-04-05 12:53:27.911672 | orchestrator | "weight": 0, 2025-04-05 12:53:27.911686 | orchestrator | "crush_location": "{}" 2025-04-05 12:53:27.911699 | orchestrator | }, 2025-04-05 12:53:27.911713 | orchestrator | { 2025-04-05 12:53:27.911727 | orchestrator | "rank": 2, 2025-04-05 12:53:27.911741 | orchestrator | "name": "testbed-node-2", 2025-04-05 12:53:27.911755 | orchestrator | "public_addrs": { 2025-04-05 12:53:27.911768 | orchestrator | "addrvec": [ 2025-04-05 12:53:27.911782 | orchestrator | { 2025-04-05 12:53:27.911796 | orchestrator | "type": "v2", 2025-04-05 12:53:27.911809 | orchestrator | "addr": "192.168.16.12:3300", 2025-04-05 12:53:27.911823 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911837 | orchestrator | }, 2025-04-05 12:53:27.911880 | orchestrator | { 2025-04-05 12:53:27.911896 | orchestrator | "type": "v1", 2025-04-05 12:53:27.911910 | orchestrator | "addr": "192.168.16.12:6789", 2025-04-05 12:53:27.911925 | orchestrator | "nonce": 0 2025-04-05 12:53:27.911938 | orchestrator | } 2025-04-05 12:53:27.911952 | orchestrator | ] 2025-04-05 12:53:27.911966 | orchestrator | }, 2025-04-05 12:53:27.911979 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-04-05 12:53:27.911993 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-04-05 12:53:27.912007 | orchestrator | "priority": 0, 2025-04-05 12:53:27.912021 | orchestrator | "weight": 0, 2025-04-05 12:53:27.912035 | orchestrator | "crush_location": "{}" 2025-04-05 12:53:27.912049 | orchestrator | } 2025-04-05 12:53:27.912063 | orchestrator | ] 2025-04-05 12:53:27.912076 | orchestrator | } 2025-04-05 12:53:27.912091 | orchestrator | } 2025-04-05 12:53:27.912112 | orchestrator | 2025-04-05 
12:53:28.445556 | orchestrator | # Ceph free space status 2025-04-05 12:53:28.446370 | orchestrator | 2025-04-05 12:53:28.446405 | orchestrator | + echo 2025-04-05 12:53:28.446422 | orchestrator | + echo '# Ceph free space status' 2025-04-05 12:53:28.446438 | orchestrator | + echo 2025-04-05 12:53:28.446453 | orchestrator | + ceph df 2025-04-05 12:53:28.446487 | orchestrator | --- RAW STORAGE --- 2025-04-05 12:53:28.478082 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-04-05 12:53:28.478124 | orchestrator | hdd 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.02 2025-04-05 12:53:28.478140 | orchestrator | TOTAL 120 GiB 111 GiB 8.4 GiB 8.4 GiB 7.02 2025-04-05 12:53:28.478176 | orchestrator | 2025-04-05 12:53:28.478191 | orchestrator | --- POOLS --- 2025-04-05 12:53:28.478205 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-04-05 12:53:28.478221 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-04-05 12:53:28.478235 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-04-05 12:53:28.478249 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-04-05 12:53:28.478264 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-04-05 12:53:28.478278 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-04-05 12:53:28.478293 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-04-05 12:53:28.478307 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-04-05 12:53:28.478321 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-04-05 12:53:28.478335 | orchestrator | .rgw.root 9 32 3.7 KiB 8 64 KiB 0 52 GiB 2025-04-05 12:53:28.478349 | orchestrator | backups 10 32 19 B 1 12 KiB 0 35 GiB 2025-04-05 12:53:28.478363 | orchestrator | volumes 11 32 19 B 1 12 KiB 0 35 GiB 2025-04-05 12:53:28.478381 | orchestrator | images 12 32 2.2 GiB 298 6.7 GiB 6.02 35 GiB 2025-04-05 12:53:28.478408 | orchestrator | metrics 13 32 19 B 1 12 KiB 0 35 GiB 2025-04-05 12:53:28.478423 | orchestrator | vms 14 32 19 B 1 12 KiB 0 35 GiB 2025-04-05 12:53:28.478447 | orchestrator | ++ semver latest 5.0.0 2025-04-05 12:53:28.525590 | orchestrator | + [[ -1 -eq -1 ]] 2025-04-05 12:53:30.109433 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-05 12:53:30.109536 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-04-05 12:53:30.109552 | orchestrator | + osism apply facts 2025-04-05 12:53:30.109583 | orchestrator | 2025-04-05 12:53:30 | INFO  | Task b392eb99-f9f6-41ed-9c68-a02d297e2329 (facts) was prepared for execution. 2025-04-05 12:53:34.256930 | orchestrator | 2025-04-05 12:53:30 | INFO  | It takes a moment until task b392eb99-f9f6-41ed-9c68-a02d297e2329 (facts) has been started and output is visible here. 
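osism apply runs its play asynchronously: the CLI first registers a task (the UUID in the INFO lines above) and the Ansible output only becomes visible once that task has been started, which is what the "It takes a moment" message refers to. For reference, the facts refresh and the Ceph validations that follow in this log can also be repeated by hand on the manager. The sketch below is illustrative only; the report-file glob is an assumption derived from the report path that the ceph-mons validator prints further down, not a stable interface.

#!/usr/bin/env bash
# Hedged sketch: re-running the facts refresh and Ceph validators that the check script
# executes from here on, then showing the newest monitor report on the manager.
set -euo pipefail

osism apply facts          # refresh custom/Ansible facts on all testbed hosts
osism validate ceph-mons   # writes a JSON report under /opt/reports/validator/ on the manager
osism validate ceph-mgrs

# Pick up the newest ceph-mons report; the exact filename contains a timestamp,
# e.g. ceph-mons-validator-<ISO timestamp>-report.json as printed later in this log.
latest=$(ls -t /opt/reports/validator/ceph-mons-validator-*-report.json | head -n 1)
jq . "$latest"
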
2025-04-05 12:53:34.257025 | orchestrator | 2025-04-05 12:53:34.260969 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-05 12:53:35.680736 | orchestrator | 2025-04-05 12:53:35.680811 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-05 12:53:35.680828 | orchestrator | Saturday 05 April 2025 12:53:34 +0000 (0:00:00.283) 0:00:00.283 ******** 2025-04-05 12:53:35.680894 | orchestrator | ok: [testbed-manager] 2025-04-05 12:53:35.681599 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:53:35.681631 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:53:35.683097 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:53:35.683771 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:53:35.685932 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:53:35.686441 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:53:35.687297 | orchestrator | 2025-04-05 12:53:35.688410 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-05 12:53:35.688845 | orchestrator | Saturday 05 April 2025 12:53:35 +0000 (0:00:01.431) 0:00:01.715 ******** 2025-04-05 12:53:35.854457 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:53:35.937315 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:53:36.018821 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:53:36.096056 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:53:36.172795 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:53:36.886614 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:53:36.887966 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:53:36.888976 | orchestrator | 2025-04-05 12:53:36.889677 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-05 12:53:36.890814 | orchestrator | 2025-04-05 12:53:36.891951 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-05 12:53:36.892892 | orchestrator | Saturday 05 April 2025 12:53:36 +0000 (0:00:01.209) 0:00:02.924 ******** 2025-04-05 12:53:41.341994 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:53:41.343038 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:53:41.344034 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:53:41.347936 | orchestrator | ok: [testbed-manager] 2025-04-05 12:53:41.348600 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:53:41.348626 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:53:41.348643 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:53:41.348664 | orchestrator | 2025-04-05 12:53:41.349906 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-05 12:53:41.350651 | orchestrator | 2025-04-05 12:53:41.351635 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-05 12:53:41.352449 | orchestrator | Saturday 05 April 2025 12:53:41 +0000 (0:00:04.456) 0:00:07.381 ******** 2025-04-05 12:53:41.510800 | orchestrator | skipping: [testbed-manager] 2025-04-05 12:53:41.599841 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:53:41.691444 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:53:41.772623 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:53:41.854368 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:53:41.902275 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:53:41.902989 | orchestrator | skipping: 
[testbed-node-5] 2025-04-05 12:53:41.903506 | orchestrator | 2025-04-05 12:53:41.904363 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:53:41.905087 | orchestrator | 2025-04-05 12:53:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:53:41.905627 | orchestrator | 2025-04-05 12:53:41 | INFO  | Please wait and do not abort execution. 2025-04-05 12:53:41.905657 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.906118 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.906985 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.907078 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.907790 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.908089 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.908388 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:53:41.908776 | orchestrator | 2025-04-05 12:53:41.909118 | orchestrator | 2025-04-05 12:53:41.909341 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:53:41.909663 | orchestrator | Saturday 05 April 2025 12:53:41 +0000 (0:00:00.561) 0:00:07.942 ******** 2025-04-05 12:53:41.910077 | orchestrator | =============================================================================== 2025-04-05 12:53:41.910362 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.46s 2025-04-05 12:53:41.910635 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.43s 2025-04-05 12:53:41.910911 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2025-04-05 12:53:41.911388 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-04-05 12:53:42.656661 | orchestrator | + osism validate ceph-mons 2025-04-05 12:54:02.097519 | orchestrator | 2025-04-05 12:54:02.097727 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-04-05 12:54:02.097758 | orchestrator | 2025-04-05 12:54:02.097775 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-04-05 12:54:02.097790 | orchestrator | Saturday 05 April 2025 12:53:48 +0000 (0:00:00.397) 0:00:00.397 ******** 2025-04-05 12:54:02.097804 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.097818 | orchestrator | 2025-04-05 12:54:02.097833 | orchestrator | TASK [Create report output directory] ****************************************** 2025-04-05 12:54:02.097847 | orchestrator | Saturday 05 April 2025 12:53:48 +0000 (0:00:00.592) 0:00:00.990 ******** 2025-04-05 12:54:02.097897 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.097913 | orchestrator | 2025-04-05 12:54:02.097928 | orchestrator | TASK [Define report vars] ****************************************************** 2025-04-05 12:54:02.097941 | orchestrator | 
Saturday 05 April 2025 12:53:49 +0000 (0:00:00.705) 0:00:01.695 ******** 2025-04-05 12:54:02.097956 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.097971 | orchestrator | 2025-04-05 12:54:02.097985 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-04-05 12:54:02.097999 | orchestrator | Saturday 05 April 2025 12:53:49 +0000 (0:00:00.205) 0:00:01.901 ******** 2025-04-05 12:54:02.098013 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.098079 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:02.098096 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:02.098113 | orchestrator | 2025-04-05 12:54:02.098129 | orchestrator | TASK [Get container info] ****************************************************** 2025-04-05 12:54:02.098160 | orchestrator | Saturday 05 April 2025 12:53:49 +0000 (0:00:00.318) 0:00:02.219 ******** 2025-04-05 12:54:02.098180 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:02.098196 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:02.098212 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.098229 | orchestrator | 2025-04-05 12:54:02.098247 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-04-05 12:54:02.098264 | orchestrator | Saturday 05 April 2025 12:53:50 +0000 (0:00:00.958) 0:00:03.178 ******** 2025-04-05 12:54:02.098280 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.098297 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:54:02.098313 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:54:02.098329 | orchestrator | 2025-04-05 12:54:02.098345 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-04-05 12:54:02.098361 | orchestrator | Saturday 05 April 2025 12:53:51 +0000 (0:00:00.266) 0:00:03.444 ******** 2025-04-05 12:54:02.098377 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.098393 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:02.098407 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:02.098421 | orchestrator | 2025-04-05 12:54:02.098435 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:02.098449 | orchestrator | Saturday 05 April 2025 12:53:51 +0000 (0:00:00.388) 0:00:03.832 ******** 2025-04-05 12:54:02.098463 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.098477 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:02.098491 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:02.098504 | orchestrator | 2025-04-05 12:54:02.098518 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-04-05 12:54:02.098532 | orchestrator | Saturday 05 April 2025 12:53:51 +0000 (0:00:00.277) 0:00:04.110 ******** 2025-04-05 12:54:02.098546 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.098560 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:54:02.098574 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:54:02.098588 | orchestrator | 2025-04-05 12:54:02.098602 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-04-05 12:54:02.098616 | orchestrator | Saturday 05 April 2025 12:53:52 +0000 (0:00:00.267) 0:00:04.377 ******** 2025-04-05 12:54:02.098630 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.098643 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:02.098668 | orchestrator | ok: [testbed-node-2] 2025-04-05 
12:54:02.098683 | orchestrator | 2025-04-05 12:54:02.098697 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-04-05 12:54:02.098711 | orchestrator | Saturday 05 April 2025 12:53:52 +0000 (0:00:00.272) 0:00:04.650 ******** 2025-04-05 12:54:02.098725 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.098738 | orchestrator | 2025-04-05 12:54:02.098752 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:02.098766 | orchestrator | Saturday 05 April 2025 12:53:52 +0000 (0:00:00.518) 0:00:05.168 ******** 2025-04-05 12:54:02.098780 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.098794 | orchestrator | 2025-04-05 12:54:02.098808 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:02.098822 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.219) 0:00:05.388 ******** 2025-04-05 12:54:02.098836 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.098850 | orchestrator | 2025-04-05 12:54:02.098885 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:02.098899 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.215) 0:00:05.604 ******** 2025-04-05 12:54:02.098914 | orchestrator | 2025-04-05 12:54:02.098928 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:02.098941 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.063) 0:00:05.668 ******** 2025-04-05 12:54:02.098955 | orchestrator | 2025-04-05 12:54:02.098970 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:02.098984 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.062) 0:00:05.730 ******** 2025-04-05 12:54:02.098998 | orchestrator | 2025-04-05 12:54:02.099011 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:02.099025 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.065) 0:00:05.796 ******** 2025-04-05 12:54:02.099040 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099053 | orchestrator | 2025-04-05 12:54:02.099067 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-04-05 12:54:02.099081 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.228) 0:00:06.025 ******** 2025-04-05 12:54:02.099095 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099109 | orchestrator | 2025-04-05 12:54:02.099137 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-04-05 12:54:02.099153 | orchestrator | Saturday 05 April 2025 12:53:53 +0000 (0:00:00.208) 0:00:06.234 ******** 2025-04-05 12:54:02.099167 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099181 | orchestrator | 2025-04-05 12:54:02.099195 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-04-05 12:54:02.099208 | orchestrator | Saturday 05 April 2025 12:53:54 +0000 (0:00:00.106) 0:00:06.340 ******** 2025-04-05 12:54:02.099222 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:54:02.099236 | orchestrator | 2025-04-05 12:54:02.099250 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-04-05 12:54:02.099270 | orchestrator | 
Saturday 05 April 2025 12:53:55 +0000 (0:00:01.586) 0:00:07.926 ******** 2025-04-05 12:54:02.099285 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099299 | orchestrator | 2025-04-05 12:54:02.099313 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-04-05 12:54:02.099327 | orchestrator | Saturday 05 April 2025 12:53:55 +0000 (0:00:00.286) 0:00:08.212 ******** 2025-04-05 12:54:02.099341 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099355 | orchestrator | 2025-04-05 12:54:02.099369 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-04-05 12:54:02.099383 | orchestrator | Saturday 05 April 2025 12:53:56 +0000 (0:00:00.315) 0:00:08.528 ******** 2025-04-05 12:54:02.099397 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099411 | orchestrator | 2025-04-05 12:54:02.099425 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-04-05 12:54:02.099446 | orchestrator | Saturday 05 April 2025 12:53:56 +0000 (0:00:00.226) 0:00:08.754 ******** 2025-04-05 12:54:02.099460 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099474 | orchestrator | 2025-04-05 12:54:02.099488 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-04-05 12:54:02.099502 | orchestrator | Saturday 05 April 2025 12:53:56 +0000 (0:00:00.204) 0:00:08.958 ******** 2025-04-05 12:54:02.099516 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099530 | orchestrator | 2025-04-05 12:54:02.099544 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-04-05 12:54:02.099558 | orchestrator | Saturday 05 April 2025 12:53:56 +0000 (0:00:00.107) 0:00:09.065 ******** 2025-04-05 12:54:02.099572 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099586 | orchestrator | 2025-04-05 12:54:02.099600 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-04-05 12:54:02.099614 | orchestrator | Saturday 05 April 2025 12:53:56 +0000 (0:00:00.138) 0:00:09.204 ******** 2025-04-05 12:54:02.099628 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099642 | orchestrator | 2025-04-05 12:54:02.099656 | orchestrator | TASK [Gather status data] ****************************************************** 2025-04-05 12:54:02.099670 | orchestrator | Saturday 05 April 2025 12:53:57 +0000 (0:00:00.116) 0:00:09.321 ******** 2025-04-05 12:54:02.099684 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:54:02.099698 | orchestrator | 2025-04-05 12:54:02.099712 | orchestrator | TASK [Set health test data] **************************************************** 2025-04-05 12:54:02.099726 | orchestrator | Saturday 05 April 2025 12:53:58 +0000 (0:00:01.301) 0:00:10.623 ******** 2025-04-05 12:54:02.099740 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099754 | orchestrator | 2025-04-05 12:54:02.099768 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-04-05 12:54:02.099782 | orchestrator | Saturday 05 April 2025 12:53:58 +0000 (0:00:00.198) 0:00:10.821 ******** 2025-04-05 12:54:02.099795 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099809 | orchestrator | 2025-04-05 12:54:02.099823 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-04-05 12:54:02.099838 | orchestrator | Saturday 05 April 
2025 12:53:58 +0000 (0:00:00.102) 0:00:10.923 ******** 2025-04-05 12:54:02.099868 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:02.099884 | orchestrator | 2025-04-05 12:54:02.099898 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-04-05 12:54:02.099912 | orchestrator | Saturday 05 April 2025 12:53:58 +0000 (0:00:00.127) 0:00:11.050 ******** 2025-04-05 12:54:02.099926 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099940 | orchestrator | 2025-04-05 12:54:02.099954 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-04-05 12:54:02.099968 | orchestrator | Saturday 05 April 2025 12:53:58 +0000 (0:00:00.113) 0:00:11.164 ******** 2025-04-05 12:54:02.099982 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.099996 | orchestrator | 2025-04-05 12:54:02.100010 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-04-05 12:54:02.100024 | orchestrator | Saturday 05 April 2025 12:53:59 +0000 (0:00:00.311) 0:00:11.476 ******** 2025-04-05 12:54:02.100039 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.100053 | orchestrator | 2025-04-05 12:54:02.100067 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-04-05 12:54:02.100081 | orchestrator | Saturday 05 April 2025 12:53:59 +0000 (0:00:00.236) 0:00:11.712 ******** 2025-04-05 12:54:02.100095 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:02.100109 | orchestrator | 2025-04-05 12:54:02.100123 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-04-05 12:54:02.100137 | orchestrator | Saturday 05 April 2025 12:53:59 +0000 (0:00:00.250) 0:00:11.963 ******** 2025-04-05 12:54:02.100151 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.100165 | orchestrator | 2025-04-05 12:54:02.100179 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:02.100205 | orchestrator | Saturday 05 April 2025 12:54:01 +0000 (0:00:01.646) 0:00:13.610 ******** 2025-04-05 12:54:02.100220 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.100234 | orchestrator | 2025-04-05 12:54:02.100249 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:02.100292 | orchestrator | Saturday 05 April 2025 12:54:01 +0000 (0:00:00.248) 0:00:13.858 ******** 2025-04-05 12:54:02.100307 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:02.100321 | orchestrator | 2025-04-05 12:54:02.100342 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:04.155162 | orchestrator | Saturday 05 April 2025 12:54:01 +0000 (0:00:00.259) 0:00:14.117 ******** 2025-04-05 12:54:04.357421 | orchestrator | 2025-04-05 12:54:04.357505 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:04.357522 | orchestrator | Saturday 05 April 2025 12:54:01 +0000 (0:00:00.082) 0:00:14.200 ******** 2025-04-05 12:54:04.357536 | orchestrator | 2025-04-05 12:54:04.357551 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:04.357565 | orchestrator | Saturday 05 April 2025 12:54:02 +0000 
(0:00:00.071) 0:00:14.271 ******** 2025-04-05 12:54:04.357578 | orchestrator | 2025-04-05 12:54:04.357593 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-04-05 12:54:04.357606 | orchestrator | Saturday 05 April 2025 12:54:02 +0000 (0:00:00.072) 0:00:14.344 ******** 2025-04-05 12:54:04.357621 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:04.357636 | orchestrator | 2025-04-05 12:54:04.357650 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:04.357667 | orchestrator | Saturday 05 April 2025 12:54:03 +0000 (0:00:01.249) 0:00:15.593 ******** 2025-04-05 12:54:04.357681 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-04-05 12:54:04.357695 | orchestrator |  "msg": [ 2025-04-05 12:54:04.357711 | orchestrator |  "Validator run completed.", 2025-04-05 12:54:04.357727 | orchestrator |  "You can find the report file here:", 2025-04-05 12:54:04.357741 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-04-05T12:53:48+00:00-report.json", 2025-04-05 12:54:04.357756 | orchestrator |  "on the following host:", 2025-04-05 12:54:04.357770 | orchestrator |  "testbed-manager" 2025-04-05 12:54:04.357806 | orchestrator |  ] 2025-04-05 12:54:04.357821 | orchestrator | } 2025-04-05 12:54:04.357836 | orchestrator | 2025-04-05 12:54:04.357850 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:54:04.357909 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-05 12:54:04.357926 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:54:04.357940 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:54:04.357960 | orchestrator | 2025-04-05 12:54:04.357975 | orchestrator | 2025-04-05 12:54:04.357991 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:54:04.358008 | orchestrator | Saturday 05 April 2025 12:54:03 +0000 (0:00:00.546) 0:00:16.140 ******** 2025-04-05 12:54:04.358076 | orchestrator | =============================================================================== 2025-04-05 12:54:04.358093 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s 2025-04-05 12:54:04.358108 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.59s 2025-04-05 12:54:04.358124 | orchestrator | Gather status data ------------------------------------------------------ 1.30s 2025-04-05 12:54:04.358140 | orchestrator | Write report file ------------------------------------------------------- 1.25s 2025-04-05 12:54:04.358177 | orchestrator | Get container info ------------------------------------------------------ 0.96s 2025-04-05 12:54:04.358194 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2025-04-05 12:54:04.358209 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-04-05 12:54:04.358226 | orchestrator | Print report file information ------------------------------------------- 0.55s 2025-04-05 12:54:04.358242 | orchestrator | Aggregate test results step one ----------------------------------------- 0.52s 2025-04-05 12:54:04.358258 | 
orchestrator | Set test result to passed if container is existing ---------------------- 0.39s 2025-04-05 12:54:04.358274 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2025-04-05 12:54:04.358290 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s 2025-04-05 12:54:04.358306 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.31s 2025-04-05 12:54:04.358322 | orchestrator | Set quorum test data ---------------------------------------------------- 0.29s 2025-04-05 12:54:04.358338 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-04-05 12:54:04.358352 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.27s 2025-04-05 12:54:04.358373 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.27s 2025-04-05 12:54:04.358387 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-04-05 12:54:04.358401 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-04-05 12:54:04.358415 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s 2025-04-05 12:54:04.358444 | orchestrator | + osism validate ceph-mgrs 2025-04-05 12:54:22.288207 | orchestrator | 2025-04-05 12:54:22.288315 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-04-05 12:54:22.288335 | orchestrator | 2025-04-05 12:54:22.288351 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-04-05 12:54:22.288366 | orchestrator | Saturday 05 April 2025 12:54:09 +0000 (0:00:00.317) 0:00:00.317 ******** 2025-04-05 12:54:22.288382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.288398 | orchestrator | 2025-04-05 12:54:22.288413 | orchestrator | TASK [Create report output directory] ****************************************** 2025-04-05 12:54:22.288428 | orchestrator | Saturday 05 April 2025 12:54:10 +0000 (0:00:00.535) 0:00:00.853 ******** 2025-04-05 12:54:22.288444 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.288459 | orchestrator | 2025-04-05 12:54:22.288474 | orchestrator | TASK [Define report vars] ****************************************************** 2025-04-05 12:54:22.288489 | orchestrator | Saturday 05 April 2025 12:54:11 +0000 (0:00:00.724) 0:00:01.577 ******** 2025-04-05 12:54:22.288505 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.288523 | orchestrator | 2025-04-05 12:54:22.288538 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-04-05 12:54:22.288553 | orchestrator | Saturday 05 April 2025 12:54:11 +0000 (0:00:00.177) 0:00:01.755 ******** 2025-04-05 12:54:22.288568 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.288584 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:22.288599 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:22.288614 | orchestrator | 2025-04-05 12:54:22.288629 | orchestrator | TASK [Get container info] ****************************************************** 2025-04-05 12:54:22.288644 | orchestrator | Saturday 05 April 2025 12:54:11 +0000 (0:00:00.280) 0:00:02.035 ******** 2025-04-05 12:54:22.288659 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:22.288674 | 
orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:22.288689 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.288704 | orchestrator | 2025-04-05 12:54:22.288719 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-04-05 12:54:22.288735 | orchestrator | Saturday 05 April 2025 12:54:12 +0000 (0:00:00.995) 0:00:03.031 ******** 2025-04-05 12:54:22.288776 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.288793 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:54:22.288810 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:54:22.288826 | orchestrator | 2025-04-05 12:54:22.288843 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-04-05 12:54:22.288883 | orchestrator | Saturday 05 April 2025 12:54:12 +0000 (0:00:00.258) 0:00:03.289 ******** 2025-04-05 12:54:22.288898 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.288912 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:22.288926 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:22.288940 | orchestrator | 2025-04-05 12:54:22.288953 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:22.288967 | orchestrator | Saturday 05 April 2025 12:54:13 +0000 (0:00:00.407) 0:00:03.697 ******** 2025-04-05 12:54:22.288981 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289013 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:22.289028 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:22.289042 | orchestrator | 2025-04-05 12:54:22.289056 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-04-05 12:54:22.289069 | orchestrator | Saturday 05 April 2025 12:54:13 +0000 (0:00:00.278) 0:00:03.975 ******** 2025-04-05 12:54:22.289083 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289097 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:54:22.289111 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:54:22.289125 | orchestrator | 2025-04-05 12:54:22.289138 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-04-05 12:54:22.289152 | orchestrator | Saturday 05 April 2025 12:54:13 +0000 (0:00:00.268) 0:00:04.244 ******** 2025-04-05 12:54:22.289165 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289179 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:54:22.289193 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:54:22.289207 | orchestrator | 2025-04-05 12:54:22.289221 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-04-05 12:54:22.289235 | orchestrator | Saturday 05 April 2025 12:54:13 +0000 (0:00:00.276) 0:00:04.520 ******** 2025-04-05 12:54:22.289248 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289262 | orchestrator | 2025-04-05 12:54:22.289276 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:22.289290 | orchestrator | Saturday 05 April 2025 12:54:14 +0000 (0:00:00.491) 0:00:05.012 ******** 2025-04-05 12:54:22.289303 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289317 | orchestrator | 2025-04-05 12:54:22.289331 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:22.289350 | orchestrator | Saturday 05 April 2025 12:54:14 +0000 (0:00:00.212) 0:00:05.225 ******** 2025-04-05 
12:54:22.289364 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289378 | orchestrator | 2025-04-05 12:54:22.289392 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.289405 | orchestrator | Saturday 05 April 2025 12:54:14 +0000 (0:00:00.220) 0:00:05.445 ******** 2025-04-05 12:54:22.289419 | orchestrator | 2025-04-05 12:54:22.289433 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.289446 | orchestrator | Saturday 05 April 2025 12:54:14 +0000 (0:00:00.063) 0:00:05.509 ******** 2025-04-05 12:54:22.289460 | orchestrator | 2025-04-05 12:54:22.289474 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.289487 | orchestrator | Saturday 05 April 2025 12:54:15 +0000 (0:00:00.066) 0:00:05.576 ******** 2025-04-05 12:54:22.289501 | orchestrator | 2025-04-05 12:54:22.289515 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:22.289528 | orchestrator | Saturday 05 April 2025 12:54:15 +0000 (0:00:00.067) 0:00:05.643 ******** 2025-04-05 12:54:22.289542 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289556 | orchestrator | 2025-04-05 12:54:22.289569 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-04-05 12:54:22.289591 | orchestrator | Saturday 05 April 2025 12:54:15 +0000 (0:00:00.215) 0:00:05.859 ******** 2025-04-05 12:54:22.289605 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289619 | orchestrator | 2025-04-05 12:54:22.289644 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-04-05 12:54:22.289659 | orchestrator | Saturday 05 April 2025 12:54:15 +0000 (0:00:00.209) 0:00:06.068 ******** 2025-04-05 12:54:22.289673 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289686 | orchestrator | 2025-04-05 12:54:22.289700 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-04-05 12:54:22.289714 | orchestrator | Saturday 05 April 2025 12:54:15 +0000 (0:00:00.108) 0:00:06.176 ******** 2025-04-05 12:54:22.289728 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:54:22.289742 | orchestrator | 2025-04-05 12:54:22.289755 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-04-05 12:54:22.289769 | orchestrator | Saturday 05 April 2025 12:54:17 +0000 (0:00:01.479) 0:00:07.655 ******** 2025-04-05 12:54:22.289783 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289797 | orchestrator | 2025-04-05 12:54:22.289810 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-04-05 12:54:22.289824 | orchestrator | Saturday 05 April 2025 12:54:17 +0000 (0:00:00.211) 0:00:07.867 ******** 2025-04-05 12:54:22.289838 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289852 | orchestrator | 2025-04-05 12:54:22.289884 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-04-05 12:54:22.289899 | orchestrator | Saturday 05 April 2025 12:54:17 +0000 (0:00:00.346) 0:00:08.214 ******** 2025-04-05 12:54:22.289913 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.289926 | orchestrator | 2025-04-05 12:54:22.289940 | orchestrator | TASK [Pass test if required mgr modules are enabled] 
*************************** 2025-04-05 12:54:22.289954 | orchestrator | Saturday 05 April 2025 12:54:17 +0000 (0:00:00.138) 0:00:08.353 ******** 2025-04-05 12:54:22.289968 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:54:22.289981 | orchestrator | 2025-04-05 12:54:22.289995 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-04-05 12:54:22.290009 | orchestrator | Saturday 05 April 2025 12:54:17 +0000 (0:00:00.142) 0:00:08.495 ******** 2025-04-05 12:54:22.290069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.290084 | orchestrator | 2025-04-05 12:54:22.290098 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-04-05 12:54:22.290112 | orchestrator | Saturday 05 April 2025 12:54:18 +0000 (0:00:00.240) 0:00:08.735 ******** 2025-04-05 12:54:22.290126 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:54:22.290140 | orchestrator | 2025-04-05 12:54:22.290154 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-04-05 12:54:22.290168 | orchestrator | Saturday 05 April 2025 12:54:18 +0000 (0:00:00.223) 0:00:08.959 ******** 2025-04-05 12:54:22.290181 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.290195 | orchestrator | 2025-04-05 12:54:22.290209 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:22.290223 | orchestrator | Saturday 05 April 2025 12:54:19 +0000 (0:00:01.150) 0:00:10.109 ******** 2025-04-05 12:54:22.290236 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.290250 | orchestrator | 2025-04-05 12:54:22.290264 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:22.290283 | orchestrator | Saturday 05 April 2025 12:54:19 +0000 (0:00:00.249) 0:00:10.358 ******** 2025-04-05 12:54:22.290298 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.290312 | orchestrator | 2025-04-05 12:54:22.290326 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.290339 | orchestrator | Saturday 05 April 2025 12:54:20 +0000 (0:00:00.252) 0:00:10.610 ******** 2025-04-05 12:54:22.290353 | orchestrator | 2025-04-05 12:54:22.290374 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.290388 | orchestrator | Saturday 05 April 2025 12:54:20 +0000 (0:00:00.070) 0:00:10.681 ******** 2025-04-05 12:54:22.290401 | orchestrator | 2025-04-05 12:54:22.290415 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:22.290429 | orchestrator | Saturday 05 April 2025 12:54:20 +0000 (0:00:00.069) 0:00:10.750 ******** 2025-04-05 12:54:22.290443 | orchestrator | 2025-04-05 12:54:22.290456 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-04-05 12:54:22.290470 | orchestrator | Saturday 05 April 2025 12:54:20 +0000 (0:00:00.070) 0:00:10.820 ******** 2025-04-05 12:54:22.290484 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:22.290498 | orchestrator | 2025-04-05 12:54:22.290511 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:22.290525 | 
orchestrator | Saturday 05 April 2025 12:54:21 +0000 (0:00:01.578) 0:00:12.399 ******** 2025-04-05 12:54:22.290539 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-04-05 12:54:22.290552 | orchestrator |  "msg": [ 2025-04-05 12:54:22.290566 | orchestrator |  "Validator run completed.", 2025-04-05 12:54:22.290623 | orchestrator |  "You can find the report file here:", 2025-04-05 12:54:22.290638 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-04-05T12:54:10+00:00-report.json", 2025-04-05 12:54:22.290653 | orchestrator |  "on the following host:", 2025-04-05 12:54:22.290667 | orchestrator |  "testbed-manager" 2025-04-05 12:54:22.290681 | orchestrator |  ] 2025-04-05 12:54:22.290694 | orchestrator | } 2025-04-05 12:54:22.290708 | orchestrator | 2025-04-05 12:54:22.290722 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:54:22.290736 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-05 12:54:22.290752 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:54:22.290775 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:54:22.557438 | orchestrator | 2025-04-05 12:54:22.772842 | orchestrator | 2025-04-05 12:54:22.772954 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:54:22.772972 | orchestrator | Saturday 05 April 2025 12:54:22 +0000 (0:00:00.408) 0:00:12.807 ******** 2025-04-05 12:54:22.772986 | orchestrator | =============================================================================== 2025-04-05 12:54:22.773000 | orchestrator | Write report file ------------------------------------------------------- 1.58s 2025-04-05 12:54:22.773015 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.48s 2025-04-05 12:54:22.773029 | orchestrator | Aggregate test results step one ----------------------------------------- 1.15s 2025-04-05 12:54:22.773043 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-04-05 12:54:22.773057 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2025-04-05 12:54:22.773071 | orchestrator | Get timestamp for report file ------------------------------------------- 0.54s 2025-04-05 12:54:22.773085 | orchestrator | Aggregate test results step one ----------------------------------------- 0.49s 2025-04-05 12:54:22.773099 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-04-05 12:54:22.773113 | orchestrator | Set test result to passed if container is existing ---------------------- 0.41s 2025-04-05 12:54:22.773127 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s 2025-04-05 12:54:22.773141 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2025-04-05 12:54:22.773155 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-04-05 12:54:22.773191 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.28s 2025-04-05 12:54:22.773206 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.27s 2025-04-05 12:54:22.773220 | orchestrator | 
Set test result to failed if container is missing ----------------------- 0.26s 2025-04-05 12:54:22.773234 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2025-04-05 12:54:22.773248 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2025-04-05 12:54:22.773274 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.24s 2025-04-05 12:54:22.773288 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.22s 2025-04-05 12:54:22.773303 | orchestrator | Aggregate test results step three --------------------------------------- 0.22s 2025-04-05 12:54:22.773329 | orchestrator | + osism validate ceph-osds 2025-04-05 12:54:32.699658 | orchestrator | 2025-04-05 12:54:32.699758 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-04-05 12:54:32.699774 | orchestrator | 2025-04-05 12:54:32.699788 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-04-05 12:54:32.699801 | orchestrator | Saturday 05 April 2025 12:54:28 +0000 (0:00:00.407) 0:00:00.407 ******** 2025-04-05 12:54:32.699814 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:32.699827 | orchestrator | 2025-04-05 12:54:32.699840 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-05 12:54:32.699852 | orchestrator | Saturday 05 April 2025 12:54:29 +0000 (0:00:00.627) 0:00:01.035 ******** 2025-04-05 12:54:32.699907 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:32.699921 | orchestrator | 2025-04-05 12:54:32.699933 | orchestrator | TASK [Create report output directory] ****************************************** 2025-04-05 12:54:32.699946 | orchestrator | Saturday 05 April 2025 12:54:29 +0000 (0:00:00.377) 0:00:01.413 ******** 2025-04-05 12:54:32.699958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:32.699971 | orchestrator | 2025-04-05 12:54:32.699984 | orchestrator | TASK [Define report vars] ****************************************************** 2025-04-05 12:54:32.699997 | orchestrator | Saturday 05 April 2025 12:54:30 +0000 (0:00:00.897) 0:00:02.310 ******** 2025-04-05 12:54:32.700009 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:32.700023 | orchestrator | 2025-04-05 12:54:32.700036 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-04-05 12:54:32.700048 | orchestrator | Saturday 05 April 2025 12:54:30 +0000 (0:00:00.127) 0:00:02.438 ******** 2025-04-05 12:54:32.700061 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:32.700074 | orchestrator | 2025-04-05 12:54:32.700086 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-04-05 12:54:32.700100 | orchestrator | Saturday 05 April 2025 12:54:30 +0000 (0:00:00.131) 0:00:02.570 ******** 2025-04-05 12:54:32.700113 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:32.700125 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:32.700138 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:32.700150 | orchestrator | 2025-04-05 12:54:32.700163 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-04-05 12:54:32.700175 | orchestrator | Saturday 05 April 2025 12:54:30 +0000 
(0:00:00.280) 0:00:02.851 ******** 2025-04-05 12:54:32.700188 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:32.700202 | orchestrator | 2025-04-05 12:54:32.700216 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-04-05 12:54:32.700230 | orchestrator | Saturday 05 April 2025 12:54:31 +0000 (0:00:00.145) 0:00:02.996 ******** 2025-04-05 12:54:32.700243 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:32.700257 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:32.700271 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:32.700285 | orchestrator | 2025-04-05 12:54:32.700299 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-04-05 12:54:32.700333 | orchestrator | Saturday 05 April 2025 12:54:31 +0000 (0:00:00.304) 0:00:03.301 ******** 2025-04-05 12:54:32.700347 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:32.700361 | orchestrator | 2025-04-05 12:54:32.700375 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:32.700389 | orchestrator | Saturday 05 April 2025 12:54:31 +0000 (0:00:00.523) 0:00:03.824 ******** 2025-04-05 12:54:32.700402 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:32.700416 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:32.700429 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:32.700455 | orchestrator | 2025-04-05 12:54:32.700469 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-04-05 12:54:32.700483 | orchestrator | Saturday 05 April 2025 12:54:32 +0000 (0:00:00.449) 0:00:04.274 ******** 2025-04-05 12:54:32.700503 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cdc7be81c5095cb38beee87afb9f0098e0e9a85fe42f9467a129efb5f8e145c4', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.700519 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7f5803913bffcf4ccf583fc28d50be028fbb468ea0e151e571bd1f5a02e6c4a3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.700534 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df518ee6be8490ed80ea42d250734b5cafcd8f21aa8cb3fa17208513e4f2f6fd', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-04-05 12:54:32.700550 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0734e248778a63922176dd25df1bac3050b93d97a9869907c145b88dbc6dc347', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.700564 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c674413299479d72e212e0994d05f3778e93797686b176e7b6f7d958bcf5d41', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2025-04-05 12:54:32.700593 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dbbea41b09c157f6d69d0239233cba05d60abf0ee5dbcabf6f6f9d76601550c9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.700607 | orchestrator | skipping: 
[testbed-node-3] => (item={'id': 'f21e57277f3ca5088ab672fbae045d78ad663deceb88cd38514ff5bfeb9f4dfc', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-04-05 12:54:32.700620 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9ea8a5e30999662f310719b1f20737145893f1afc3052eccb66bc0afb3aab1f0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-04-05 12:54:32.700632 | orchestrator | skipping: [testbed-node-3] => (item={'id': '407fa247dae26c1aa2f22fd38b9d722ecfffe87c8386f40ef7b9f4a62182ccc5', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-04-05 12:54:32.700645 | orchestrator | skipping: [testbed-node-3] => (item={'id': '715adfaba6fa054b8430d4de428fdc202e8572ba590ad69308c96f3b9d57993a', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-04-05 12:54:32.700658 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f1ea4315214c0634c5d527269f0a9944794c4a2cb85fc4c5e141fac9ec0ec34', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-04-05 12:54:32.700677 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3d9f0369dc17c29ce7ed7f27a89c19bf703988a877a16f7fac7c84032427104b', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-04-05 12:54:32.700690 | orchestrator | ok: [testbed-node-3] => (item={'id': '1d546a963ce04febadd352f925aea74804e79cdb4cf716af3f72cc48e1f8c003', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 12:54:32.700703 | orchestrator | ok: [testbed-node-3] => (item={'id': '0aa0f10c3628d127f3071c8e25a4fe278c02d3af566d064fa5c7ed18ece8c8ee', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 12:54:32.700716 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e1b205d24bba21ad9bb39ddefd4c67ce67a88d5ecb3ee5509016d4b1a3a28e3c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-04-05 12:54:32.700728 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9be5b90dd8dc92e7ae36b2d027b4e704342ce81faf2707a99fec3a1606f4280b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-04-05 12:54:32.700740 | orchestrator | skipping: [testbed-node-3] => (item={'id': '299d4969919d015b196139dd0167a8d790aad6f0c6a1e76f738c683e315a8440', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-04-05 12:54:32.700754 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d09ea8badd8cff3109bae4fde81151391cd76ecdd9c3967269f4da289939fc2', 'image': 'registry.osism.tech/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-04-05 12:54:32.700766 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3401152c90aa0a525a56fa666ba329ea65f23d76607228259731c104a28ffdd', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:32.700778 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2bef2f191d64e280e418f497705cc9e725c5549936edd29afca81da1f75e306', 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:32.700791 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ef2e6da786b12030f946dff1adf3eabb11b92a12b5b7ca0fdef0e107679339e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.700809 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b620794886e9721c3fa5ced91695bfde0e0f28f18f124004185726a198e7ad97', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.824756 | orchestrator | skipping: [testbed-node-4] => (item={'id': '61d99b061c97585771609e1dc55433222e4d197f22eb6ed676f70a4c4a152bac', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-04-05 12:54:32.824810 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a19399808cd91a9163136c2d355c6e5a8be832dfbdc3bcdb177f56dead701d8b', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.824824 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e5ff9a7ca5fca71a781ec266504cc829e814657de44707ed48923e6598e67b9', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2025-04-05 12:54:32.824838 | orchestrator | skipping: [testbed-node-4] => (item={'id': '55a021aa1e23e70112d344c27a1105c10e5d3a922f1c9d4e6462d976b051222a', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.824911 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0b2f07798293bb3dbeabf2e27e89a5a451673d1159f52356b2aff26177965e8a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-04-05 12:54:32.824925 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f3ab90a1cc66b50d0f5238be8c7ca961e59634e69e5043b9ed03cff219d4020', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-04-05 12:54:32.824938 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35c9efd60e242acf09703760717710cef2b774fe21f6fc7b05ed238ec2263194', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-04-05 12:54:32.824951 | orchestrator | skipping: [testbed-node-4] => (item={'id': '80fa68b15c475d8b11ddc679499d3abbaa5a87da4de93d7ed8d289b95e1047b0', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 
minutes'})  2025-04-05 12:54:32.824965 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9b58a0749b42b006c87209bfeca3072425ede04d36afb0dbe42179fe42259bdd', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-04-05 12:54:32.824977 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20a304e2aa4fa630108ac40510a7a7a931168509431afec70621fd0a51bba539', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-04-05 12:54:32.824994 | orchestrator | ok: [testbed-node-4] => (item={'id': 'adf94157117adbda65cdd3361bb388917151f0056e9f24943e8f5f313bf2d0da', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 12:54:32.825008 | orchestrator | ok: [testbed-node-4] => (item={'id': '5bdc266b3b9fb12dc139cd2a1191df81fab59971f6cd4f370b7b6579b50e16d8', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 12:54:32.825021 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ccf59f19007459db80ebebb4447d785b8c44cb50dbd70066b5b9d461b5880362', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-04-05 12:54:32.825034 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5dcc87953e77f21894d8af6a7d9cc647dd0f120d3ae52ad26abbd96efb71145', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-04-05 12:54:32.825046 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1ee4bfd33c3374bc4eae3eb740d7ce211455006749500f3512a841becdab004c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-04-05 12:54:32.825067 | orchestrator | skipping: [testbed-node-4] => (item={'id': '28dfbc5eea2873396286f040eb282c4acaa2556ee808195ce3ce6f509960c6a3', 'image': 'registry.osism.tech/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-04-05 12:54:32.825081 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8dac96d30a02449bbd2fad809412269e56689368bc821c0c91c2122494624594', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:32.825094 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a856bbe27b11c4e9129ab85e282dfa5f3d9bec47b1f40b02dd1b5c0812eaef5', 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:32.825113 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97d9f5150149af8f9f448ce31e9a12f8377ed4cc8f3958633bf271e5da33e82b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.825126 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cc8478d3e63061357f090f5cc0cad28be5694397b051458b660dbb748cefb3d6', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-04-05 12:54:32.825139 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': '610189d6a1fe212175e7e990ff2313b7d38054886c0ab8a378e572e3243682a6', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-04-05 12:54:32.825151 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63e406844804b1a0769a9502e8c78aa66528afdf99793766ebf72acf0bae0e1c', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.825164 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6d0c0e64ead13ecbf94e1f8bb285e60bdab79194db5aa47e095911ec610b1b82', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2025-04-05 12:54:32.825177 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fbcfbdcd27cc04d2c0f82a2803891b6d06cd1f5ac2bd69079c185092fdd65460', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-04-05 12:54:32.825189 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af9eab26fc61137216bef57065c67c8ce2406d20bbdb330d34bf2d814d07ee92', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})  2025-04-05 12:54:32.825202 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9067e331f03290084d23f2b173164d592fbf7ddc5cb239b2fe9bc25a959d2ae7', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-04-05 12:54:32.825214 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74334d3ddf83d2e0789f9c0860396b41fb69948ae41aeb24c3a2f3a85d039de8', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-04-05 12:54:32.825227 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c6c678f833da0f3dca488c84b4c7cd2a2779efb26bd85aec96b6cc573f981a69', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-rgw-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-04-05 12:54:32.825244 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bddd88d1d8b0f68a45716ccfd55dccec6aa2f2c4b6bf85d0f175ee0bc4d401fd', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-04-05 12:54:32.825268 | orchestrator | skipping: [testbed-node-5] => (item={'id': '51d2a6125ca251b705efddcb5259ae46b7e23ddd6f5761e86c60ef732b407f7e', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-04-05 12:54:32.825288 | orchestrator | ok: [testbed-node-5] => (item={'id': 'baa725be74b51193dfc05f1c495a81147ccf1625b3aa033227473f8ee9af0453', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 12:54:41.091190 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b90be6d5ddf217bf479467b17d2eb71d7a1af2c29fad240a38384a1a29d4f2ee', 'image': 'registry.osism.tech/osism/ceph-daemon:quincy', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-04-05 
12:54:41.091311 | orchestrator | skipping: [testbed-node-5] => (item={'id': '64eaf3ceccd95112db5fe0ea6f583f607639155bba31b3ba02c620c3dbb523d1', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-04-05 12:54:41.091331 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0dc78148702c229e9f323c47218c6ac1317a65b2b24061d9a874a768092d8524', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-04-05 12:54:41.091347 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a22b0e3dfa79a28ce7861c73ed2a7ed7f4716bf4053be4377c76c8c8c696bc98', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-04-05 12:54:41.091362 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd2e90c36c456f57995caf2bd95b6d0cc3350c2ee3d88454eeb4d6f16cdbbc7e6', 'image': 'registry.osism.tech/kolla/cron:2024.1', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-04-05 12:54:41.091376 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd764c2dbe67384ac99e34d64b20fd6197c994dc3506dbd3e93432d456ec1f6ff', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:41.091390 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fb5e6d49fcf966e692bccd0454a53b97ea537a57834668f504bc88bbc9fa071', 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-04-05 12:54:41.091404 | orchestrator | 2025-04-05 12:54:41.091419 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-04-05 12:54:41.091434 | orchestrator | Saturday 05 April 2025 12:54:32 +0000 (0:00:00.429) 0:00:04.703 ******** 2025-04-05 12:54:41.091447 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.091462 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.091476 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.091489 | orchestrator | 2025-04-05 12:54:41.091502 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-04-05 12:54:41.091516 | orchestrator | Saturday 05 April 2025 12:54:33 +0000 (0:00:00.279) 0:00:04.983 ******** 2025-04-05 12:54:41.091529 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.091543 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:41.091557 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:41.091570 | orchestrator | 2025-04-05 12:54:41.091584 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-04-05 12:54:41.091597 | orchestrator | Saturday 05 April 2025 12:54:33 +0000 (0:00:00.497) 0:00:05.480 ******** 2025-04-05 12:54:41.091611 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.091624 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.091637 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.091650 | orchestrator | 2025-04-05 12:54:41.091680 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:41.091693 | orchestrator | Saturday 05 April 2025 12:54:33 +0000 (0:00:00.307) 0:00:05.788 ******** 2025-04-05 12:54:41.091706 | orchestrator | ok: [testbed-node-3] 
2025-04-05 12:54:41.091720 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.091733 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.091747 | orchestrator | 2025-04-05 12:54:41.091761 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-04-05 12:54:41.091775 | orchestrator | Saturday 05 April 2025 12:54:34 +0000 (0:00:00.288) 0:00:06.077 ******** 2025-04-05 12:54:41.091790 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-04-05 12:54:41.091828 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-04-05 12:54:41.091842 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.091856 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-04-05 12:54:41.091898 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-04-05 12:54:41.091912 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:41.091927 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-04-05 12:54:41.091941 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-04-05 12:54:41.091955 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:41.091968 | orchestrator | 2025-04-05 12:54:41.091982 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-04-05 12:54:41.091997 | orchestrator | Saturday 05 April 2025 12:54:34 +0000 (0:00:00.304) 0:00:06.381 ******** 2025-04-05 12:54:41.092011 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092024 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.092037 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.092049 | orchestrator | 2025-04-05 12:54:41.092072 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-04-05 12:54:41.092086 | orchestrator | Saturday 05 April 2025 12:54:34 +0000 (0:00:00.439) 0:00:06.821 ******** 2025-04-05 12:54:41.092098 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092111 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:41.092123 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:41.092135 | orchestrator | 2025-04-05 12:54:41.092148 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-04-05 12:54:41.092163 | orchestrator | Saturday 05 April 2025 12:54:35 +0000 (0:00:00.286) 0:00:07.107 ******** 2025-04-05 12:54:41.092175 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092188 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:41.092200 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:41.092213 | orchestrator | 2025-04-05 12:54:41.092225 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-04-05 12:54:41.092238 | orchestrator | Saturday 05 April 2025 12:54:35 +0000 (0:00:00.286) 0:00:07.394 ******** 2025-04-05 12:54:41.092250 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092262 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.092275 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.092287 | orchestrator | 2025-04-05 12:54:41.092299 | orchestrator | TASK [Aggregate test results step 
one] ***************************************** 2025-04-05 12:54:41.092312 | orchestrator | Saturday 05 April 2025 12:54:35 +0000 (0:00:00.290) 0:00:07.685 ******** 2025-04-05 12:54:41.092324 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092336 | orchestrator | 2025-04-05 12:54:41.092349 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:41.092361 | orchestrator | Saturday 05 April 2025 12:54:36 +0000 (0:00:00.669) 0:00:08.354 ******** 2025-04-05 12:54:41.092373 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092385 | orchestrator | 2025-04-05 12:54:41.092398 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:41.092410 | orchestrator | Saturday 05 April 2025 12:54:36 +0000 (0:00:00.252) 0:00:08.607 ******** 2025-04-05 12:54:41.092422 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092435 | orchestrator | 2025-04-05 12:54:41.092447 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:41.092459 | orchestrator | Saturday 05 April 2025 12:54:36 +0000 (0:00:00.214) 0:00:08.822 ******** 2025-04-05 12:54:41.092471 | orchestrator | 2025-04-05 12:54:41.092484 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:41.092496 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.064) 0:00:08.886 ******** 2025-04-05 12:54:41.092517 | orchestrator | 2025-04-05 12:54:41.092529 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:41.092541 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.064) 0:00:08.950 ******** 2025-04-05 12:54:41.092554 | orchestrator | 2025-04-05 12:54:41.092566 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:41.092578 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.066) 0:00:09.017 ******** 2025-04-05 12:54:41.092590 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092603 | orchestrator | 2025-04-05 12:54:41.092615 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-04-05 12:54:41.092627 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.242) 0:00:09.260 ******** 2025-04-05 12:54:41.092639 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092652 | orchestrator | 2025-04-05 12:54:41.092664 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:41.092676 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.237) 0:00:09.497 ******** 2025-04-05 12:54:41.092688 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092700 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.092713 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.092725 | orchestrator | 2025-04-05 12:54:41.092737 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-04-05 12:54:41.092755 | orchestrator | Saturday 05 April 2025 12:54:37 +0000 (0:00:00.295) 0:00:09.792 ******** 2025-04-05 12:54:41.092768 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092780 | orchestrator | 2025-04-05 12:54:41.092793 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-04-05 12:54:41.092805 | 
orchestrator | Saturday 05 April 2025 12:54:38 +0000 (0:00:00.659) 0:00:10.452 ******** 2025-04-05 12:54:41.092817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-05 12:54:41.092829 | orchestrator | 2025-04-05 12:54:41.092841 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-04-05 12:54:41.092853 | orchestrator | Saturday 05 April 2025 12:54:40 +0000 (0:00:01.574) 0:00:12.027 ******** 2025-04-05 12:54:41.092881 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092894 | orchestrator | 2025-04-05 12:54:41.092907 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-04-05 12:54:41.092919 | orchestrator | Saturday 05 April 2025 12:54:40 +0000 (0:00:00.130) 0:00:12.157 ******** 2025-04-05 12:54:41.092931 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.092943 | orchestrator | 2025-04-05 12:54:41.092955 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-04-05 12:54:41.092968 | orchestrator | Saturday 05 April 2025 12:54:40 +0000 (0:00:00.248) 0:00:12.405 ******** 2025-04-05 12:54:41.092980 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:41.092992 | orchestrator | 2025-04-05 12:54:41.093005 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-04-05 12:54:41.093016 | orchestrator | Saturday 05 April 2025 12:54:40 +0000 (0:00:00.127) 0:00:12.533 ******** 2025-04-05 12:54:41.093029 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.093041 | orchestrator | 2025-04-05 12:54:41.093053 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:41.093065 | orchestrator | Saturday 05 April 2025 12:54:40 +0000 (0:00:00.134) 0:00:12.667 ******** 2025-04-05 12:54:41.093077 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:41.093089 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:41.093101 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:41.093114 | orchestrator | 2025-04-05 12:54:41.093126 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-04-05 12:54:41.093144 | orchestrator | Saturday 05 April 2025 12:54:41 +0000 (0:00:00.294) 0:00:12.961 ******** 2025-04-05 12:54:51.732396 | orchestrator | changed: [testbed-node-3] 2025-04-05 12:54:51.732459 | orchestrator | changed: [testbed-node-4] 2025-04-05 12:54:51.732495 | orchestrator | changed: [testbed-node-5] 2025-04-05 12:54:51.732509 | orchestrator | 2025-04-05 12:54:51.732523 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-04-05 12:54:51.732536 | orchestrator | Saturday 05 April 2025 12:54:42 +0000 (0:00:01.543) 0:00:14.505 ******** 2025-04-05 12:54:51.732549 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.732563 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.732575 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.732588 | orchestrator | 2025-04-05 12:54:51.732601 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-04-05 12:54:51.732614 | orchestrator | Saturday 05 April 2025 12:54:42 +0000 (0:00:00.314) 0:00:14.819 ******** 2025-04-05 12:54:51.732626 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.732639 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.732665 | orchestrator | ok: [testbed-node-5] 2025-04-05 
12:54:51.732677 | orchestrator | 2025-04-05 12:54:51.732690 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-04-05 12:54:51.732703 | orchestrator | Saturday 05 April 2025 12:54:43 +0000 (0:00:00.407) 0:00:15.226 ******** 2025-04-05 12:54:51.732715 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:51.732727 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:51.732740 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:51.732753 | orchestrator | 2025-04-05 12:54:51.732765 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-04-05 12:54:51.732778 | orchestrator | Saturday 05 April 2025 12:54:43 +0000 (0:00:00.305) 0:00:15.532 ******** 2025-04-05 12:54:51.732790 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.732802 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.732815 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.732827 | orchestrator | 2025-04-05 12:54:51.732840 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-04-05 12:54:51.732852 | orchestrator | Saturday 05 April 2025 12:54:44 +0000 (0:00:00.540) 0:00:16.072 ******** 2025-04-05 12:54:51.732893 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:51.732906 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:51.732918 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:51.732930 | orchestrator | 2025-04-05 12:54:51.732943 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-04-05 12:54:51.732955 | orchestrator | Saturday 05 April 2025 12:54:44 +0000 (0:00:00.280) 0:00:16.353 ******** 2025-04-05 12:54:51.732969 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:51.732983 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:51.732997 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:51.733010 | orchestrator | 2025-04-05 12:54:51.733024 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-04-05 12:54:51.733037 | orchestrator | Saturday 05 April 2025 12:54:44 +0000 (0:00:00.279) 0:00:16.632 ******** 2025-04-05 12:54:51.733051 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.733065 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.733078 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.733092 | orchestrator | 2025-04-05 12:54:51.733105 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-04-05 12:54:51.733119 | orchestrator | Saturday 05 April 2025 12:54:45 +0000 (0:00:00.410) 0:00:17.043 ******** 2025-04-05 12:54:51.733133 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.733146 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.733160 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.733174 | orchestrator | 2025-04-05 12:54:51.733187 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-04-05 12:54:51.733207 | orchestrator | Saturday 05 April 2025 12:54:45 +0000 (0:00:00.699) 0:00:17.743 ******** 2025-04-05 12:54:51.733221 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.733234 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.733248 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.733262 | orchestrator | 2025-04-05 12:54:51.733276 | orchestrator | TASK [Fail test if any sub test failed] 
**************************************** 2025-04-05 12:54:51.733297 | orchestrator | Saturday 05 April 2025 12:54:46 +0000 (0:00:00.303) 0:00:18.046 ******** 2025-04-05 12:54:51.733309 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:51.733322 | orchestrator | skipping: [testbed-node-4] 2025-04-05 12:54:51.733334 | orchestrator | skipping: [testbed-node-5] 2025-04-05 12:54:51.733346 | orchestrator | 2025-04-05 12:54:51.733359 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-04-05 12:54:51.733371 | orchestrator | Saturday 05 April 2025 12:54:46 +0000 (0:00:00.286) 0:00:18.333 ******** 2025-04-05 12:54:51.733383 | orchestrator | ok: [testbed-node-3] 2025-04-05 12:54:51.733396 | orchestrator | ok: [testbed-node-4] 2025-04-05 12:54:51.733408 | orchestrator | ok: [testbed-node-5] 2025-04-05 12:54:51.733420 | orchestrator | 2025-04-05 12:54:51.733432 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-04-05 12:54:51.733445 | orchestrator | Saturday 05 April 2025 12:54:46 +0000 (0:00:00.309) 0:00:18.642 ******** 2025-04-05 12:54:51.733457 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:51.733469 | orchestrator | 2025-04-05 12:54:51.733482 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-04-05 12:54:51.733494 | orchestrator | Saturday 05 April 2025 12:54:47 +0000 (0:00:00.608) 0:00:19.250 ******** 2025-04-05 12:54:51.733506 | orchestrator | skipping: [testbed-node-3] 2025-04-05 12:54:51.733518 | orchestrator | 2025-04-05 12:54:51.733530 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-04-05 12:54:51.733543 | orchestrator | Saturday 05 April 2025 12:54:47 +0000 (0:00:00.235) 0:00:19.486 ******** 2025-04-05 12:54:51.733555 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:51.733567 | orchestrator | 2025-04-05 12:54:51.733579 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-04-05 12:54:51.733591 | orchestrator | Saturday 05 April 2025 12:54:49 +0000 (0:00:01.519) 0:00:21.006 ******** 2025-04-05 12:54:51.733604 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:51.733616 | orchestrator | 2025-04-05 12:54:51.733628 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-04-05 12:54:51.733640 | orchestrator | Saturday 05 April 2025 12:54:49 +0000 (0:00:00.250) 0:00:21.257 ******** 2025-04-05 12:54:51.733662 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:51.938012 | orchestrator | 2025-04-05 12:54:51.938116 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:51.938130 | orchestrator | Saturday 05 April 2025 12:54:49 +0000 (0:00:00.243) 0:00:21.500 ******** 2025-04-05 12:54:51.938142 | orchestrator | 2025-04-05 12:54:51.938155 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:51.938167 | orchestrator | Saturday 05 April 2025 12:54:49 +0000 (0:00:00.065) 0:00:21.565 ******** 2025-04-05 12:54:51.938180 | orchestrator | 2025-04-05 12:54:51.938192 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-04-05 12:54:51.938204 | orchestrator | Saturday 05 April 
2025 12:54:49 +0000 (0:00:00.064) 0:00:21.630 ******** 2025-04-05 12:54:51.938217 | orchestrator | 2025-04-05 12:54:51.938229 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-04-05 12:54:51.938242 | orchestrator | Saturday 05 April 2025 12:54:49 +0000 (0:00:00.069) 0:00:21.699 ******** 2025-04-05 12:54:51.938254 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-05 12:54:51.938280 | orchestrator | 2025-04-05 12:54:51.938293 | orchestrator | TASK [Print report file information] ******************************************* 2025-04-05 12:54:51.938305 | orchestrator | Saturday 05 April 2025 12:54:50 +0000 (0:00:01.142) 0:00:22.841 ******** 2025-04-05 12:54:51.938317 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-04-05 12:54:51.938330 | orchestrator |  "msg": [ 2025-04-05 12:54:51.938343 | orchestrator |  "Validator run completed.", 2025-04-05 12:54:51.938356 | orchestrator |  "You can find the report file here:", 2025-04-05 12:54:51.938383 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-04-05T12:54:29+00:00-report.json", 2025-04-05 12:54:51.938397 | orchestrator |  "on the following host:", 2025-04-05 12:54:51.938409 | orchestrator |  "testbed-manager" 2025-04-05 12:54:51.938421 | orchestrator |  ] 2025-04-05 12:54:51.938434 | orchestrator | } 2025-04-05 12:54:51.938447 | orchestrator | 2025-04-05 12:54:51.938459 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:54:51.938472 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-04-05 12:54:51.938486 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-05 12:54:51.938499 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-05 12:54:51.938511 | orchestrator | 2025-04-05 12:54:51.938524 | orchestrator | 2025-04-05 12:54:51.938536 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:54:51.938549 | orchestrator | Saturday 05 April 2025 12:54:51 +0000 (0:00:00.522) 0:00:23.364 ******** 2025-04-05 12:54:51.938561 | orchestrator | =============================================================================== 2025-04-05 12:54:51.938574 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.57s 2025-04-05 12:54:51.938586 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.54s 2025-04-05 12:54:51.938601 | orchestrator | Aggregate test results step one ----------------------------------------- 1.52s 2025-04-05 12:54:51.938620 | orchestrator | Write report file ------------------------------------------------------- 1.14s 2025-04-05 12:54:51.938634 | orchestrator | Create report output directory ------------------------------------------ 0.90s 2025-04-05 12:54:51.938648 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.70s 2025-04-05 12:54:51.938662 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2025-04-05 12:54:51.938676 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.66s 2025-04-05 12:54:51.938689 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-04-05 
12:54:51.938703 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.61s 2025-04-05 12:54:51.938717 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.54s 2025-04-05 12:54:51.938731 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.52s 2025-04-05 12:54:51.938744 | orchestrator | Print report file information ------------------------------------------- 0.52s 2025-04-05 12:54:51.938758 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2025-04-05 12:54:51.938772 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2025-04-05 12:54:51.938786 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.44s 2025-04-05 12:54:51.938800 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.43s 2025-04-05 12:54:51.938814 | orchestrator | Prepare test data ------------------------------------------------------- 0.41s 2025-04-05 12:54:51.938828 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.41s 2025-04-05 12:54:51.938841 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.38s 2025-04-05 12:54:51.938882 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-04-05 12:54:51.945798 | orchestrator | + set -e 2025-04-05 12:54:51.945826 | orchestrator | + source /opt/manager-vars.sh 2025-04-05 12:54:51.945841 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-05 12:54:51.945855 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-05 12:54:51.945892 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-05 12:54:51.945905 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-05 12:54:51.945930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-05 12:54:51.945942 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-05 12:54:51.945955 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:54:51.945967 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:54:51.945980 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-05 12:54:51.945992 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-05 12:54:51.946005 | orchestrator | ++ export ARA=false 2025-04-05 12:54:51.946142 | orchestrator | ++ ARA=false 2025-04-05 12:54:51.946164 | orchestrator | ++ export TEMPEST=false 2025-04-05 12:54:51.946177 | orchestrator | ++ TEMPEST=false 2025-04-05 12:54:51.946190 | orchestrator | ++ export IS_ZUUL=true 2025-04-05 12:54:51.946204 | orchestrator | ++ IS_ZUUL=true 2025-04-05 12:54:51.946217 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 12:54:51.946230 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-04-05 12:54:51.946244 | orchestrator | ++ export EXTERNAL_API=false 2025-04-05 12:54:51.946257 | orchestrator | ++ EXTERNAL_API=false 2025-04-05 12:54:51.946271 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-05 12:54:51.946284 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-05 12:54:51.946303 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-05 12:54:51.946317 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-05 12:54:51.946331 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-05 12:54:51.946344 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-05 12:54:51.946357 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-04-05 12:54:51.946371 | 
orchestrator | + source /etc/os-release 2025-04-05 12:54:51.946389 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-04-05 12:54:51.958953 | orchestrator | ++ NAME=Ubuntu 2025-04-05 12:54:51.958977 | orchestrator | ++ VERSION_ID=24.04 2025-04-05 12:54:51.958991 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-04-05 12:54:51.959003 | orchestrator | ++ VERSION_CODENAME=noble 2025-04-05 12:54:51.959016 | orchestrator | ++ ID=ubuntu 2025-04-05 12:54:51.959029 | orchestrator | ++ ID_LIKE=debian 2025-04-05 12:54:51.959041 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-04-05 12:54:51.959054 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-04-05 12:54:51.959066 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-04-05 12:54:51.959079 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-04-05 12:54:51.959093 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-04-05 12:54:51.959106 | orchestrator | ++ LOGO=ubuntu-logo 2025-04-05 12:54:51.959118 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-04-05 12:54:51.959131 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-04-05 12:54:51.959144 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-04-05 12:54:51.959162 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-04-05 12:55:10.261253 | orchestrator | 2025-04-05 12:55:10.412106 | orchestrator | # Status of Elasticsearch 2025-04-05 12:55:10.412189 | orchestrator | 2025-04-05 12:55:10.412206 | orchestrator | + pushd /opt/configuration/contrib 2025-04-05 12:55:10.412222 | orchestrator | + echo 2025-04-05 12:55:10.412237 | orchestrator | + echo '# Status of Elasticsearch' 2025-04-05 12:55:10.412251 | orchestrator | + echo 2025-04-05 12:55:10.412265 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-04-05 12:55:10.412315 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 21; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=21 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-04-05 12:55:10.440053 | orchestrator | 2025-04-05 12:55:10.440084 | orchestrator | + echo 2025-04-05 12:55:10.440099 | orchestrator | # Status of MariaDB 2025-04-05 12:55:10.440114 | orchestrator | 2025-04-05 12:55:10.440129 | orchestrator | + echo '# Status of MariaDB' 2025-04-05 12:55:10.440143 | orchestrator | + echo 2025-04-05 12:55:10.440157 | orchestrator | + MARIADB_USER=root_shard_0 2025-04-05 12:55:10.440171 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-04-05 12:55:10.462922 | orchestrator | Reading package lists... 2025-04-05 12:55:10.718578 | orchestrator | Building dependency tree... 2025-04-05 12:55:10.719296 | orchestrator | Reading state information... 2025-04-05 12:55:11.055165 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-04-05 12:55:11.674252 | orchestrator | bc set to manually installed. 2025-04-05 12:55:11.674351 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
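For reference, the figures reported by check_elasticsearch above come straight from the cluster health API; a minimal curl sketch of the same query (port 9200 and anonymous access are assumptions, as the plugin invocation does not show them):

    # Query cluster health directly; mirrors the plugin output fields
    # (status, number_of_nodes, active_primary_shards, active_shards, ...).
    curl -s "https://api-int.testbed.osism.xyz:9200/_cluster/health?pretty"

The plugin essentially maps the returned status green/yellow/red onto OK/WARNING/CRITICAL.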
2025-04-05 12:55:11.674383 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-04-05 12:55:11.674744 | orchestrator | 2025-04-05 12:55:11.734946 | orchestrator | # Status of Prometheus 2025-04-05 12:55:11.735012 | orchestrator | 2025-04-05 12:55:11.735027 | orchestrator | + echo 2025-04-05 12:55:11.735041 | orchestrator | + echo '# Status of Prometheus' 2025-04-05 12:55:11.735055 | orchestrator | + echo 2025-04-05 12:55:11.735070 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-04-05 12:55:11.735096 | orchestrator | Unauthorized 2025-04-05 12:55:11.737325 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-04-05 12:55:11.794920 | orchestrator | Unauthorized 2025-04-05 12:55:11.797264 | orchestrator | 2025-04-05 12:55:12.224999 | orchestrator | # Status of RabbitMQ 2025-04-05 12:55:12.225079 | orchestrator | 2025-04-05 12:55:12.225094 | orchestrator | + echo 2025-04-05 12:55:12.225108 | orchestrator | + echo '# Status of RabbitMQ' 2025-04-05 12:55:12.225122 | orchestrator | + echo 2025-04-05 12:55:12.225137 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-04-05 12:55:12.225166 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-04-05 12:55:12.233710 | orchestrator | 2025-04-05 12:55:12.239669 | orchestrator | # Status of Redis 2025-04-05 12:55:12.239696 | orchestrator | 2025-04-05 12:55:12.239711 | orchestrator | + echo 2025-04-05 12:55:12.239725 | orchestrator | + echo '# Status of Redis' 2025-04-05 12:55:12.239739 | orchestrator | + echo 2025-04-05 12:55:12.239754 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-04-05 12:55:12.239775 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001395s;;;0.000000;10.000000 2025-04-05 12:55:12.240469 | orchestrator | 2025-04-05 12:55:13.865064 | orchestrator | # Create backup of MariaDB database 2025-04-05 12:55:13.865172 | orchestrator | 2025-04-05 12:55:13.865192 | orchestrator | + popd 2025-04-05 12:55:13.865207 | orchestrator | + echo 2025-04-05 12:55:13.865221 | orchestrator | + echo '# Create backup of MariaDB database' 2025-04-05 12:55:13.865235 | orchestrator | + echo 2025-04-05 12:55:13.865250 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-04-05 12:55:13.865283 | orchestrator | 2025-04-05 12:55:13 | INFO  | Task 219b5034-0f3e-4946-828e-aa52a1ad2ed5 (mariadb_backup) was prepared for execution. 2025-04-05 12:55:16.780001 | orchestrator | 2025-04-05 12:55:13 | INFO  | It takes a moment until task 219b5034-0f3e-4946-828e-aa52a1ad2ed5 (mariadb_backup) has been started and output is visible here. 
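The Prometheus probes answer "Unauthorized" because the testbed fronts them with authentication; a hedged sketch of the same probes with credentials supplied, plus a redis-cli equivalent of the check_tcp call (the user, password, and AUTH secret variables are placeholders, not values taken from this log):

    # Prometheus liveness/readiness with HTTP basic auth (placeholder credentials).
    curl -s -u "${PROMETHEUS_USER}:${PROMETHEUS_PASSWORD}" https://api-int.testbed.osism.xyz:9091/-/healthy
    curl -s -u "${PROMETHEUS_USER}:${PROMETHEUS_PASSWORD}" https://api-int.testbed.osism.xyz:9091/-/ready

    # Redis probe equivalent to the check_tcp invocation above (placeholder secret).
    redis-cli -h 192.168.16.10 -p 6379 -a "${REDIS_AUTH}" ping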
2025-04-05 12:55:16.780143 | orchestrator | 2025-04-05 12:55:16.781763 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:55:16.782269 | orchestrator | 2025-04-05 12:55:16.782303 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:55:16.787721 | orchestrator | Saturday 05 April 2025 12:55:16 +0000 (0:00:00.142) 0:00:00.142 ******** 2025-04-05 12:55:17.013154 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:55:17.135286 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:55:17.140074 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:55:17.140681 | orchestrator | 2025-04-05 12:55:17.142182 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:55:17.142914 | orchestrator | Saturday 05 April 2025 12:55:17 +0000 (0:00:00.351) 0:00:00.493 ******** 2025-04-05 12:55:17.766281 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-05 12:55:17.767209 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-05 12:55:17.767910 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-05 12:55:17.767944 | orchestrator | 2025-04-05 12:55:17.769189 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-05 12:55:17.769933 | orchestrator | 2025-04-05 12:55:17.770575 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-05 12:55:17.771265 | orchestrator | Saturday 05 April 2025 12:55:17 +0000 (0:00:00.638) 0:00:01.131 ******** 2025-04-05 12:55:18.188026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-05 12:55:18.188357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-05 12:55:18.189119 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-05 12:55:18.190062 | orchestrator | 2025-04-05 12:55:18.190672 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-05 12:55:18.191147 | orchestrator | Saturday 05 April 2025 12:55:18 +0000 (0:00:00.421) 0:00:01.553 ******** 2025-04-05 12:55:18.836698 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:55:18.837269 | orchestrator | 2025-04-05 12:55:18.837312 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-04-05 12:55:18.837966 | orchestrator | Saturday 05 April 2025 12:55:18 +0000 (0:00:00.645) 0:00:02.199 ******** 2025-04-05 12:55:21.818349 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:55:21.818869 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:55:21.819064 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:55:21.819093 | orchestrator | 2025-04-05 12:55:21.819416 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-04-05 12:55:21.819712 | orchestrator | Saturday 05 April 2025 12:55:21 +0000 (0:00:02.984) 0:00:05.184 ******** 2025-04-05 12:55:37.935710 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-05 12:55:37.937156 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-04-05 12:55:37.937194 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-05 12:55:37.937216 | orchestrator | 
mariadb_bootstrap_restart 2025-04-05 12:55:38.005973 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:55:38.006741 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:55:38.008809 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:55:38.012306 | orchestrator | 2025-04-05 12:55:38.013907 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-05 12:55:38.014131 | orchestrator | skipping: no hosts matched 2025-04-05 12:55:38.020123 | orchestrator | 2025-04-05 12:55:38.021002 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-05 12:55:38.023991 | orchestrator | skipping: no hosts matched 2025-04-05 12:55:38.024259 | orchestrator | 2025-04-05 12:55:38.025358 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-05 12:55:38.026243 | orchestrator | skipping: no hosts matched 2025-04-05 12:55:38.028138 | orchestrator | 2025-04-05 12:55:38.030655 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-05 12:55:38.031286 | orchestrator | 2025-04-05 12:55:38.031623 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-05 12:55:38.033079 | orchestrator | Saturday 05 April 2025 12:55:38 +0000 (0:00:16.189) 0:00:21.373 ******** 2025-04-05 12:55:38.263448 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:55:38.383789 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:55:38.384775 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:55:38.384814 | orchestrator | 2025-04-05 12:55:38.385420 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-05 12:55:38.386098 | orchestrator | Saturday 05 April 2025 12:55:38 +0000 (0:00:00.371) 0:00:21.745 ******** 2025-04-05 12:55:38.550393 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:55:38.590814 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:55:38.591130 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:55:38.591671 | orchestrator | 2025-04-05 12:55:38.592505 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:55:38.593237 | orchestrator | 2025-04-05 12:55:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:55:38.593870 | orchestrator | 2025-04-05 12:55:38 | INFO  | Please wait and do not abort execution. 
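The full backup task only reports "changed" on testbed-node-0, presumably the designated backup host for this shard. To inspect the resulting Mariabackup artifacts, a sketch assuming kolla-ansible's default backup volume name mariadb_backup:

    # Locate the backup volume on the node that performed the backup and list its contents.
    ssh testbed-node-0 "sudo docker volume inspect mariadb_backup --format '{{ .Mountpoint }}'"
    ssh testbed-node-0 "sudo ls -lh \$(sudo docker volume inspect mariadb_backup --format '{{ .Mountpoint }}')"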
2025-04-05 12:55:38.593972 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:55:38.594365 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:55:38.595237 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:55:38.595870 | orchestrator | 2025-04-05 12:55:38.597119 | orchestrator | 2025-04-05 12:55:38.597673 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:55:38.598698 | orchestrator | Saturday 05 April 2025 12:55:38 +0000 (0:00:00.213) 0:00:21.958 ******** 2025-04-05 12:55:38.599593 | orchestrator | =============================================================================== 2025-04-05 12:55:38.600315 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 16.19s 2025-04-05 12:55:38.601485 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.98s 2025-04-05 12:55:38.602871 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.65s 2025-04-05 12:55:38.604016 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-04-05 12:55:38.604048 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2025-04-05 12:55:38.604957 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.37s 2025-04-05 12:55:38.605848 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-04-05 12:55:38.606341 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2025-04-05 12:55:39.214693 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-04-05 12:55:40.844249 | orchestrator | 2025-04-05 12:55:40 | INFO  | Task 2cfac128-339b-4a37-ab37-fe31dbac847e (mariadb_backup) was prepared for execution. 2025-04-05 12:55:44.106546 | orchestrator | 2025-04-05 12:55:40 | INFO  | It takes a moment until task 2cfac128-339b-4a37-ab37-fe31dbac847e (mariadb_backup) has been started and output is visible here. 
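The incremental run that follows depends on the full backup taken just before it. A minimal wrapper sketch for scheduling the two osism calls, assuming the osism CLI on the manager; the weekly/daily split is an example policy, not something this job configures:

    #!/usr/bin/env bash
    # Example only: full backup on Sundays, incremental on all other days.
    set -e
    if [ "$(date +%u)" -eq 7 ]; then
        osism apply mariadb_backup -e mariadb_backup_type=full
    else
        osism apply mariadb_backup -e mariadb_backup_type=incremental
    fi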
2025-04-05 12:55:44.106644 | orchestrator | 2025-04-05 12:55:44.108635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-05 12:55:44.109566 | orchestrator | 2025-04-05 12:55:44.110372 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-05 12:55:44.111323 | orchestrator | Saturday 05 April 2025 12:55:44 +0000 (0:00:00.164) 0:00:00.164 ******** 2025-04-05 12:55:44.477543 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:55:44.577461 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:55:44.577743 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:55:44.581400 | orchestrator | 2025-04-05 12:55:44.582786 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-05 12:55:44.584376 | orchestrator | Saturday 05 April 2025 12:55:44 +0000 (0:00:00.467) 0:00:00.632 ******** 2025-04-05 12:55:45.272375 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-05 12:55:45.274373 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-05 12:55:45.274616 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-05 12:55:45.275620 | orchestrator | 2025-04-05 12:55:45.276442 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-05 12:55:45.277558 | orchestrator | 2025-04-05 12:55:45.278174 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-05 12:55:45.279145 | orchestrator | Saturday 05 April 2025 12:55:45 +0000 (0:00:00.698) 0:00:01.330 ******** 2025-04-05 12:55:45.737162 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-05 12:55:45.738651 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-05 12:55:45.739712 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-05 12:55:45.742074 | orchestrator | 2025-04-05 12:55:45.742733 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-05 12:55:45.743099 | orchestrator | Saturday 05 April 2025 12:55:45 +0000 (0:00:00.466) 0:00:01.796 ******** 2025-04-05 12:55:46.383084 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-05 12:55:46.383445 | orchestrator | 2025-04-05 12:55:46.384544 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-04-05 12:55:46.385216 | orchestrator | Saturday 05 April 2025 12:55:46 +0000 (0:00:00.641) 0:00:02.438 ******** 2025-04-05 12:55:49.374372 | orchestrator | ok: [testbed-node-0] 2025-04-05 12:55:49.375143 | orchestrator | ok: [testbed-node-1] 2025-04-05 12:55:49.376524 | orchestrator | ok: [testbed-node-2] 2025-04-05 12:55:49.377584 | orchestrator | 2025-04-05 12:55:49.378393 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-04-05 12:55:49.379624 | orchestrator | Saturday 05 April 2025 12:55:49 +0000 (0:00:02.993) 0:00:05.432 ******** 2025-04-05 12:56:04.811855 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-05 12:56:04.812970 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-04-05 12:56:04.813029 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-05 12:56:04.813667 | orchestrator | 
mariadb_bootstrap_restart 2025-04-05 12:56:04.878989 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:56:04.880195 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:56:04.881054 | orchestrator | changed: [testbed-node-0] 2025-04-05 12:56:04.882634 | orchestrator | 2025-04-05 12:56:04.883731 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-05 12:56:04.886129 | orchestrator | skipping: no hosts matched 2025-04-05 12:56:04.886974 | orchestrator | 2025-04-05 12:56:04.887772 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-05 12:56:04.888336 | orchestrator | skipping: no hosts matched 2025-04-05 12:56:04.891623 | orchestrator | 2025-04-05 12:56:04.892240 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-05 12:56:04.893079 | orchestrator | skipping: no hosts matched 2025-04-05 12:56:04.894320 | orchestrator | 2025-04-05 12:56:04.894692 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-05 12:56:04.895567 | orchestrator | 2025-04-05 12:56:04.897671 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-05 12:56:04.898184 | orchestrator | Saturday 05 April 2025 12:56:04 +0000 (0:00:15.507) 0:00:20.939 ******** 2025-04-05 12:56:05.159115 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:56:05.267642 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:56:05.268309 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:56:05.269316 | orchestrator | 2025-04-05 12:56:05.273030 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-05 12:56:05.432761 | orchestrator | Saturday 05 April 2025 12:56:05 +0000 (0:00:00.383) 0:00:21.323 ******** 2025-04-05 12:56:05.432884 | orchestrator | skipping: [testbed-node-0] 2025-04-05 12:56:05.470239 | orchestrator | skipping: [testbed-node-1] 2025-04-05 12:56:05.472027 | orchestrator | skipping: [testbed-node-2] 2025-04-05 12:56:05.473181 | orchestrator | 2025-04-05 12:56:05.474420 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 12:56:05.474772 | orchestrator | 2025-04-05 12:56:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 12:56:05.475118 | orchestrator | 2025-04-05 12:56:05 | INFO  | Please wait and do not abort execution. 
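After both backup runs, the Galera cluster can be spot-checked with the mysql client installed earlier, reusing the credentials the check script passed to check_galera_cluster (the default MySQL port is assumed):

    mysql -h api-int.testbed.osism.xyz -u root_shard_0 -ppassword \
        -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_ready','wsrep_local_state_comment');"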
2025-04-05 12:56:05.476008 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-05 12:56:05.476632 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:56:05.477570 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-05 12:56:05.478206 | orchestrator | 2025-04-05 12:56:05.478708 | orchestrator | 2025-04-05 12:56:05.479231 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 12:56:05.479888 | orchestrator | Saturday 05 April 2025 12:56:05 +0000 (0:00:00.206) 0:00:21.529 ******** 2025-04-05 12:56:05.480267 | orchestrator | =============================================================================== 2025-04-05 12:56:05.480718 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 15.51s 2025-04-05 12:56:05.481145 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.99s 2025-04-05 12:56:05.481718 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-04-05 12:56:05.482243 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.64s 2025-04-05 12:56:05.482995 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-04-05 12:56:05.483343 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s 2025-04-05 12:56:05.483821 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.38s 2025-04-05 12:56:05.484199 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2025-04-05 12:56:06.024080 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-04-05 12:56:06.029699 | orchestrator | + set -e 2025-04-05 12:56:06.030968 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-05 12:56:06.030998 | orchestrator | ++ export INTERACTIVE=false 2025-04-05 12:56:06.031012 | orchestrator | ++ INTERACTIVE=false 2025-04-05 12:56:06.031024 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-05 12:56:06.031037 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-05 12:56:06.031049 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-04-05 12:56:06.031068 | orchestrator | +++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2025-04-05 12:56:06.073841 | orchestrator | 2025-04-05 12:56:09.334728 | orchestrator | # OpenStack endpoints 2025-04-05 12:56:09.334829 | orchestrator | 2025-04-05 12:56:09.334844 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-05 12:56:09.334858 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-05 12:56:09.334870 | orchestrator | + export OS_CLOUD=admin 2025-04-05 12:56:09.334883 | orchestrator | + OS_CLOUD=admin 2025-04-05 12:56:09.334937 | orchestrator | + echo 2025-04-05 12:56:09.334952 | orchestrator | + echo '# OpenStack endpoints' 2025-04-05 12:56:09.334965 | orchestrator | + echo 2025-04-05 12:56:09.334977 | orchestrator | + openstack endpoint list 2025-04-05 12:56:09.335007 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-04-05 12:56:09.335022 | orchestrator | | 
ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-04-05 12:56:09.335034 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-04-05 12:56:09.335047 | orchestrator | | 015511ef6c784932ace685867d781d7b | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-04-05 12:56:09.335060 | orchestrator | | 04e6a19952534c57bcb81018b8bd5d8a | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-04-05 12:56:09.335072 | orchestrator | | 138ab1f803c94c318cf8c7b6b352f21b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-04-05 12:56:09.335085 | orchestrator | | 3703c64c5bbe4c229daa13103dd35dcc | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-04-05 12:56:09.335097 | orchestrator | | 470ed5f19c074e34b061612e6963b37a | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-04-05 12:56:09.335144 | orchestrator | | 59182e12ca4e4ad090adb6a9e0cbd882 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-04-05 12:56:09.335158 | orchestrator | | 5e335ed2ed704364b323a915f533ba5f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-04-05 12:56:09.335172 | orchestrator | | 64de5e664b4f4c40a373983ffddbd05d | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-04-05 12:56:09.335184 | orchestrator | | 6f1d242637e2410c87ebb88718d8f659 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-04-05 12:56:09.335198 | orchestrator | | 832b96726f4c4697932814d3ca8d8395 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-04-05 12:56:09.335211 | orchestrator | | 83a6e0b9defb4ee2b927ada4c0bb46b6 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-04-05 12:56:09.335223 | orchestrator | | 88f9a5ee470b42e5b9d9e486d9aeccbd | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-04-05 12:56:09.335235 | orchestrator | | 8a0f8319a8574d14b9bdc967f2fb7734 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-04-05 12:56:09.335248 | orchestrator | | 9ee7b529eae845398aa499bd5aa04aa5 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-04-05 12:56:09.335261 | orchestrator | | bd5ae5f0a3a14088ab259ff2e7ec1b0b | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-04-05 12:56:09.335273 | orchestrator | | c5c4580e0f3446b1b294c0e52aa61130 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-04-05 12:56:09.335286 | orchestrator | | c8fd4f08458d4bf8ab9f1f4fc1241e1d | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-04-05 12:56:09.335300 | orchestrator | | caa11681c7f542968aeabe4f272c5937 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-04-05 12:56:09.335315 | orchestrator | | cb12ad2b731a4a6f95b87956920a6ae6 | RegionOne | neutron | network | 
True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-04-05 12:56:09.335329 | orchestrator | | d7bd692a886f4307a5f492e98b181b2a | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-04-05 12:56:09.335352 | orchestrator | | e92c7abbe8c34607a90f7140c6aaf762 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-04-05 12:56:09.538098 | orchestrator | | f44e49fa588e47d887fafe220109a75c | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-04-05 12:56:09.538139 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-04-05 12:56:09.538161 | orchestrator | 2025-04-05 12:56:12.153322 | orchestrator | # Cinder 2025-04-05 12:56:12.153435 | orchestrator | 2025-04-05 12:56:12.153453 | orchestrator | + echo 2025-04-05 12:56:12.153468 | orchestrator | + echo '# Cinder' 2025-04-05 12:56:12.153482 | orchestrator | + echo 2025-04-05 12:56:12.153496 | orchestrator | + openstack volume service list 2025-04-05 12:56:12.153528 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:12.424367 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-04-05 12:56:12.424462 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:12.424478 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-04-05T12:56:08.000000 | 2025-04-05 12:56:12.424492 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-04-05T12:56:08.000000 | 2025-04-05 12:56:12.424507 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-04-05T12:56:09.000000 | 2025-04-05 12:56:12.424522 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-04-05T12:56:08.000000 | 2025-04-05 12:56:12.424535 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-04-05T12:56:08.000000 | 2025-04-05 12:56:12.424562 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-04-05T12:56:08.000000 | 2025-04-05 12:56:12.424577 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-04-05T12:56:03.000000 | 2025-04-05 12:56:12.424592 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-04-05T12:56:03.000000 | 2025-04-05 12:56:12.424608 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-04-05T12:56:03.000000 | 2025-04-05 12:56:12.424623 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:12.424652 | orchestrator | 2025-04-05 12:56:15.571657 | orchestrator | # Neutron 2025-04-05 12:56:15.571758 | orchestrator | 2025-04-05 12:56:15.571774 | orchestrator | + echo 2025-04-05 12:56:15.571787 | orchestrator | + echo '# Neutron' 2025-04-05 12:56:15.571800 | orchestrator | + echo 2025-04-05 12:56:15.571812 | orchestrator | + openstack network agent list 2025-04-05 12:56:15.571841 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-04-05 12:56:15.795374 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-04-05 12:56:15.795454 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-04-05 12:56:15.795468 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795482 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795494 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795507 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795519 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795532 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-04-05 12:56:15.795544 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-04-05 12:56:15.795556 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-04-05 12:56:15.795569 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-04-05 12:56:15.795607 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-04-05 12:56:15.795650 | orchestrator | + openstack network service provider list 2025-04-05 12:56:18.392890 | orchestrator | +---------------+------+---------+ 2025-04-05 12:56:18.603604 | orchestrator | | Service Type | Name | Default | 2025-04-05 12:56:18.603661 | orchestrator | +---------------+------+---------+ 2025-04-05 12:56:18.603674 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-04-05 12:56:18.603687 | orchestrator | +---------------+------+---------+ 2025-04-05 12:56:18.603710 | orchestrator | 2025-04-05 12:56:21.152102 | orchestrator | # Nova 2025-04-05 12:56:21.152210 | orchestrator | 2025-04-05 12:56:21.152227 | orchestrator | + echo 2025-04-05 12:56:21.152241 | orchestrator | + echo '# Nova' 2025-04-05 12:56:21.152255 | orchestrator | + echo 2025-04-05 12:56:21.152269 | orchestrator | + openstack compute service list 2025-04-05 12:56:21.152301 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:21.394449 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-04-05 12:56:21.394537 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:21.394550 | orchestrator | | a143483a-ee96-451a-a429-70f9c1ad99c2 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-04-05T12:56:11.000000 | 2025-04-05 12:56:21.394562 | orchestrator | | 
306f48c7-f646-4ca3-b0c3-79038723c54e | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-04-05T12:56:20.000000 | 2025-04-05 12:56:21.394574 | orchestrator | | 0e28b820-b794-4aaa-8480-5dd558a7c47c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-04-05T12:56:11.000000 | 2025-04-05 12:56:21.394585 | orchestrator | | 1089867e-3d3f-4206-ba45-c8e2ebfd90ef | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-04-05T12:56:19.000000 | 2025-04-05 12:56:21.394596 | orchestrator | | 91403c18-da2e-4dc1-a50b-46b71495d3ec | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-04-05T12:56:19.000000 | 2025-04-05 12:56:21.394617 | orchestrator | | 1a608f5a-195e-4d85-a287-6380aa1c4faf | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-04-05T12:56:19.000000 | 2025-04-05 12:56:21.394629 | orchestrator | | 77a4b58f-b803-4989-bedc-77a1dd4a254f | nova-compute | testbed-node-4 | nova | enabled | up | 2025-04-05T12:56:16.000000 | 2025-04-05 12:56:21.394653 | orchestrator | | 8216a0f9-07fa-4841-9240-29d831e443be | nova-compute | testbed-node-3 | nova | enabled | up | 2025-04-05T12:56:16.000000 | 2025-04-05 12:56:21.394665 | orchestrator | | ce873440-fd1d-4e0a-988a-0d9d542c0c4b | nova-compute | testbed-node-5 | nova | enabled | up | 2025-04-05T12:56:16.000000 | 2025-04-05 12:56:21.394676 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-04-05 12:56:21.394701 | orchestrator | + openstack hypervisor list 2025-04-05 12:56:24.890636 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-04-05 12:56:25.069241 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-04-05 12:56:25.069330 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-04-05 12:56:25.069348 | orchestrator | | e26ec08a-dc57-4ee7-afef-2cdad47719e0 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-04-05 12:56:25.069363 | orchestrator | | 3edc04d9-4a8c-4edc-b3ed-b4b494297cbc | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-04-05 12:56:25.069378 | orchestrator | | 77fc240c-f2eb-4678-b069-be6c0c98d6d4 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-04-05 12:56:25.069392 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-04-05 12:56:25.069420 | orchestrator | 2025-04-05 12:56:26.420572 | orchestrator | # Run OpenStack test play 2025-04-05 12:56:26.420687 | orchestrator | 2025-04-05 12:56:26.420706 | orchestrator | + echo 2025-04-05 12:56:26.420721 | orchestrator | + echo '# Run OpenStack test play' 2025-04-05 12:56:26.420737 | orchestrator | + echo 2025-04-05 12:56:26.420753 | orchestrator | + osism apply --environment openstack test 2025-04-05 12:56:26.420784 | orchestrator | 2025-04-05 12:56:26 | INFO  | Trying to run play test in environment openstack 2025-04-05 12:56:26.472834 | orchestrator | 2025-04-05 12:56:26 | INFO  | Task 51cc436a-0492-4b69-b6b1-2f10d52e3f02 (test) was prepared for execution. 2025-04-05 12:56:29.457192 | orchestrator | 2025-04-05 12:56:26 | INFO  | It takes a moment until task 51cc436a-0492-4b69-b6b1-2f10d52e3f02 (test) has been started and output is visible here. 
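The first tasks of the test play that starts here map to plain CLI calls; a rough sketch of the equivalent commands (domain, user, role, and project names are taken from the task headings; the password is a placeholder):

    export OS_CLOUD=admin
    openstack domain create test
    openstack user create --domain test --password "${TEST_ADMIN_PASSWORD}" test-admin
    openstack role add --user test-admin --user-domain test --domain test manager
    openstack project create --domain test test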
2025-04-05 12:56:29.457309 | orchestrator | 2025-04-05 12:56:29.457710 | orchestrator | PLAY [Create test project] ***************************************************** 2025-04-05 12:56:29.457739 | orchestrator | 2025-04-05 12:56:29.457765 | orchestrator | TASK [Create test domain] ****************************************************** 2025-04-05 12:56:32.727084 | orchestrator | Saturday 05 April 2025 12:56:29 +0000 (0:00:00.055) 0:00:00.055 ******** 2025-04-05 12:56:32.727198 | orchestrator | changed: [localhost] 2025-04-05 12:56:32.727563 | orchestrator | 2025-04-05 12:56:32.727591 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-04-05 12:56:32.727611 | orchestrator | Saturday 05 April 2025 12:56:32 +0000 (0:00:03.269) 0:00:03.324 ******** 2025-04-05 12:56:36.828413 | orchestrator | changed: [localhost] 2025-04-05 12:56:36.831102 | orchestrator | 2025-04-05 12:56:36.831141 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-04-05 12:56:36.831164 | orchestrator | Saturday 05 April 2025 12:56:36 +0000 (0:00:04.100) 0:00:07.424 ******** 2025-04-05 12:56:42.246203 | orchestrator | changed: [localhost] 2025-04-05 12:56:42.246537 | orchestrator | 2025-04-05 12:56:42.247176 | orchestrator | TASK [Create test project] ***************************************************** 2025-04-05 12:56:42.247580 | orchestrator | Saturday 05 April 2025 12:56:42 +0000 (0:00:05.421) 0:00:12.846 ******** 2025-04-05 12:56:46.190658 | orchestrator | changed: [localhost] 2025-04-05 12:56:46.190877 | orchestrator | 2025-04-05 12:56:46.192049 | orchestrator | TASK [Create test user] ******************************************************** 2025-04-05 12:56:46.193600 | orchestrator | Saturday 05 April 2025 12:56:46 +0000 (0:00:03.941) 0:00:16.787 ******** 2025-04-05 12:56:50.232404 | orchestrator | changed: [localhost] 2025-04-05 12:56:50.234388 | orchestrator | 2025-04-05 12:56:50.235545 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-04-05 12:56:50.237145 | orchestrator | Saturday 05 April 2025 12:56:50 +0000 (0:00:04.042) 0:00:20.830 ******** 2025-04-05 12:57:01.007693 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-04-05 12:57:01.008026 | orchestrator | changed: [localhost] => (item=member) 2025-04-05 12:57:01.008059 | orchestrator | changed: [localhost] => (item=creator) 2025-04-05 12:57:01.008075 | orchestrator | 2025-04-05 12:57:01.008098 | orchestrator | TASK [Create test server group] ************************************************ 2025-04-05 12:57:01.009274 | orchestrator | Saturday 05 April 2025 12:57:00 +0000 (0:00:10.773) 0:00:31.604 ******** 2025-04-05 12:57:05.288443 | orchestrator | changed: [localhost] 2025-04-05 12:57:05.290009 | orchestrator | 2025-04-05 12:57:05.290835 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-04-05 12:57:05.291288 | orchestrator | Saturday 05 April 2025 12:57:05 +0000 (0:00:04.282) 0:00:35.887 ******** 2025-04-05 12:57:10.090360 | orchestrator | changed: [localhost] 2025-04-05 12:57:10.090757 | orchestrator | 2025-04-05 12:57:10.090797 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-04-05 12:57:10.091183 | orchestrator | Saturday 05 April 2025 12:57:10 +0000 (0:00:04.802) 0:00:40.690 ******** 2025-04-05 12:57:13.991425 | orchestrator | changed: [localhost] 2025-04-05 12:57:13.992193 | 
orchestrator | 2025-04-05 12:57:13.992334 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-04-05 12:57:18.200521 | orchestrator | Saturday 05 April 2025 12:57:13 +0000 (0:00:03.901) 0:00:44.591 ******** 2025-04-05 12:57:18.200670 | orchestrator | changed: [localhost] 2025-04-05 12:57:18.201361 | orchestrator | 2025-04-05 12:57:18.202270 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-04-05 12:57:18.204810 | orchestrator | Saturday 05 April 2025 12:57:18 +0000 (0:00:04.208) 0:00:48.799 ******** 2025-04-05 12:57:21.781450 | orchestrator | changed: [localhost] 2025-04-05 12:57:21.782908 | orchestrator | 2025-04-05 12:57:21.784639 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-04-05 12:57:21.785584 | orchestrator | Saturday 05 April 2025 12:57:21 +0000 (0:00:03.581) 0:00:52.380 ******** 2025-04-05 12:57:25.423028 | orchestrator | changed: [localhost] 2025-04-05 12:57:25.424084 | orchestrator | 2025-04-05 12:57:25.424871 | orchestrator | TASK [Create test network topology] ******************************************** 2025-04-05 12:57:25.425668 | orchestrator | Saturday 05 April 2025 12:57:25 +0000 (0:00:03.639) 0:00:56.020 ******** 2025-04-05 12:57:38.635154 | orchestrator | changed: [localhost] 2025-04-05 12:57:38.635539 | orchestrator | 2025-04-05 12:57:38.635585 | orchestrator | TASK [Create test instances] *************************************************** 2025-04-05 12:57:38.636501 | orchestrator | Saturday 05 April 2025 12:57:38 +0000 (0:00:13.213) 0:01:09.233 ******** 2025-04-05 12:59:59.366249 | orchestrator | changed: [localhost] => (item=test) 2025-04-05 12:59:59.366726 | orchestrator | changed: [localhost] => (item=test-1) 2025-04-05 12:59:59.366754 | orchestrator | changed: [localhost] => (item=test-2) 2025-04-05 12:59:59.366768 | orchestrator | changed: [localhost] => (item=test-3) 2025-04-05 12:59:59.366788 | orchestrator | 2025-04-05 12:59:59.367281 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-04-05 13:00:12.360358 | orchestrator | changed: [localhost] => (item=test-4) 2025-04-05 13:00:12.360785 | orchestrator | 2025-04-05 13:00:12.360819 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-04-05 13:00:12.360841 | orchestrator | Saturday 05 April 2025 13:00:12 +0000 (0:02:33.724) 0:03:42.958 ******** 2025-04-05 13:00:35.002569 | orchestrator | changed: [localhost] => (item=test) 2025-04-05 13:00:35.003625 | orchestrator | changed: [localhost] => (item=test-1) 2025-04-05 13:00:35.003698 | orchestrator | changed: [localhost] => (item=test-2) 2025-04-05 13:00:35.004844 | orchestrator | changed: [localhost] => (item=test-3) 2025-04-05 13:00:35.006253 | orchestrator | changed: [localhost] => (item=test-4) 2025-04-05 13:00:35.006775 | orchestrator | 2025-04-05 13:00:35.007787 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-04-05 13:00:35.008181 | orchestrator | Saturday 05 April 2025 13:00:34 +0000 (0:00:22.639) 0:04:05.598 ******** 2025-04-05 13:01:04.865765 | orchestrator | changed: [localhost] => (item=test) 2025-04-05 13:01:04.867225 | orchestrator | changed: [localhost] => (item=test-1) 2025-04-05 13:01:04.870567 | orchestrator | changed: [localhost] => (item=test-2) 2025-04-05 13:01:04.870594 | orchestrator | changed: [localhost] => (item=test-3) 2025-04-05 
13:01:04.870612 | orchestrator | changed: [localhost] => (item=test-4) 2025-04-05 13:01:04.870750 | orchestrator | 2025-04-05 13:01:04.870771 | orchestrator | TASK [Create test volume] ****************************************************** 2025-04-05 13:01:04.870790 | orchestrator | Saturday 05 April 2025 13:01:04 +0000 (0:00:29.863) 0:04:35.461 ******** 2025-04-05 13:01:11.396473 | orchestrator | changed: [localhost] 2025-04-05 13:01:11.397100 | orchestrator | 2025-04-05 13:01:11.397600 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-04-05 13:01:11.398493 | orchestrator | Saturday 05 April 2025 13:01:11 +0000 (0:00:06.534) 0:04:41.996 ******** 2025-04-05 13:01:20.706397 | orchestrator | changed: [localhost] 2025-04-05 13:01:20.707089 | orchestrator | 2025-04-05 13:01:20.708531 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-04-05 13:01:20.710567 | orchestrator | Saturday 05 April 2025 13:01:20 +0000 (0:00:09.309) 0:04:51.306 ******** 2025-04-05 13:01:25.600241 | orchestrator | ok: [localhost] 2025-04-05 13:01:25.601901 | orchestrator | 2025-04-05 13:01:25.606482 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-04-05 13:01:25.606547 | orchestrator | Saturday 05 April 2025 13:01:25 +0000 (0:00:04.893) 0:04:56.200 ******** 2025-04-05 13:01:25.640140 | orchestrator | ok: [localhost] => { 2025-04-05 13:01:25.641332 | orchestrator |  "msg": "192.168.112.134" 2025-04-05 13:01:25.642110 | orchestrator | } 2025-04-05 13:01:25.645506 | orchestrator | 2025-04-05 13:01:25.646412 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-05 13:01:25.646447 | orchestrator | 2025-04-05 13:01:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-05 13:01:25.646692 | orchestrator | 2025-04-05 13:01:25 | INFO  | Please wait and do not abort execution. 
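With the floating IP printed (192.168.112.134) and the ssh/icmp security groups plus the test keypair in place, a minimal connectivity sketch from the manager (the private key path and the cirros login user are assumptions):

    FIP=192.168.112.134
    ping -c 3 "${FIP}"
    ssh -i ~/.ssh/test-keypair -o StrictHostKeyChecking=no cirros@"${FIP}" uptime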
2025-04-05 13:01:25.646721 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-05 13:01:25.647084 | orchestrator | 2025-04-05 13:01:25.648591 | orchestrator | 2025-04-05 13:01:25.648871 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-05 13:01:25.649698 | orchestrator | Saturday 05 April 2025 13:01:25 +0000 (0:00:00.041) 0:04:56.241 ******** 2025-04-05 13:01:25.650593 | orchestrator | =============================================================================== 2025-04-05 13:01:25.650894 | orchestrator | Create test instances ------------------------------------------------- 153.72s 2025-04-05 13:01:25.651627 | orchestrator | Add tag to instances --------------------------------------------------- 29.86s 2025-04-05 13:01:25.652380 | orchestrator | Add metadata to instances ---------------------------------------------- 22.64s 2025-04-05 13:01:25.653222 | orchestrator | Create test network topology ------------------------------------------- 13.21s 2025-04-05 13:01:25.653366 | orchestrator | Add member roles to user test ------------------------------------------ 10.77s 2025-04-05 13:01:25.653826 | orchestrator | Attach test volume ------------------------------------------------------ 9.31s 2025-04-05 13:01:25.655533 | orchestrator | Create test volume ------------------------------------------------------ 6.53s 2025-04-05 13:01:25.656033 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.42s 2025-04-05 13:01:25.657102 | orchestrator | Create floating ip address ---------------------------------------------- 4.89s 2025-04-05 13:01:25.658006 | orchestrator | Create ssh security group ----------------------------------------------- 4.80s 2025-04-05 13:01:25.658861 | orchestrator | Create test server group ------------------------------------------------ 4.28s 2025-04-05 13:01:25.659768 | orchestrator | Create icmp security group ---------------------------------------------- 4.21s 2025-04-05 13:01:25.660490 | orchestrator | Create test-admin user -------------------------------------------------- 4.10s 2025-04-05 13:01:25.660896 | orchestrator | Create test user -------------------------------------------------------- 4.04s 2025-04-05 13:01:25.661688 | orchestrator | Create test project ----------------------------------------------------- 3.94s 2025-04-05 13:01:25.662118 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.90s 2025-04-05 13:01:25.662665 | orchestrator | Create test keypair ----------------------------------------------------- 3.64s 2025-04-05 13:01:25.663177 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.58s 2025-04-05 13:01:25.663971 | orchestrator | Create test domain ------------------------------------------------------ 3.27s 2025-04-05 13:01:25.664332 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-04-05 13:01:26.137129 | orchestrator | + server_list 2025-04-05 13:01:29.239224 | orchestrator | + openstack --os-cloud test server list 2025-04-05 13:01:29.239353 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-04-05 13:01:29.565512 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-04-05 13:01:29.565602 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-04-05 13:01:29.565645 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE | auto_allocated_network=10.42.0.19, 192.168.112.154 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-04-05 13:01:29.565660 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.106 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-04-05 13:01:29.565674 | orchestrator | | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE | auto_allocated_network=10.42.0.29, 192.168.112.192 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-04-05 13:01:29.565688 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.143 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-04-05 13:01:29.565702 | orchestrator | | 342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.134 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-04-05 13:01:29.565716 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-04-05 13:01:29.565745 | orchestrator | + openstack --os-cloud test server show test 2025-04-05 13:01:33.146718 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:33.146834 | orchestrator | | Field | Value | 2025-04-05 13:01:33.146854 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:33.146869 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-04-05 13:01:33.146883 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-04-05 13:01:33.146897 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-04-05 13:01:33.146938 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-04-05 13:01:33.146954 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-04-05 13:01:33.146989 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-04-05 13:01:33.147003 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-04-05 13:01:33.147017 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-04-05 13:01:33.147045 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-04-05 13:01:33.147062 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-04-05 13:01:33.147076 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-04-05 13:01:33.147090 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-04-05 13:01:33.147104 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-04-05 13:01:33.147118 | orchestrator | | OS-EXT-STS:task_state | None | 2025-04-05 13:01:33.147132 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-04-05 13:01:33.147146 | orchestrator | | OS-SRV-USG:launched_at | 2025-04-05T12:57:59.000000 | 2025-04-05 13:01:33.147168 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-04-05 13:01:33.147182 | orchestrator | | accessIPv4 | | 2025-04-05 13:01:33.147199 | orchestrator | | accessIPv6 | | 2025-04-05 13:01:33.147213 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.134 | 2025-04-05 13:01:33.147234 | orchestrator | | config_drive | | 2025-04-05 13:01:33.147251 | orchestrator | | created | 2025-04-05T12:57:46Z | 2025-04-05 13:01:33.147268 | orchestrator | | description | None | 2025-04-05 13:01:33.147284 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-04-05 13:01:33.147300 | orchestrator | | hostId | b8a36f8772853a7276a60d8b6842574aca8bcf940b778dd1cf2c67b9 | 2025-04-05 13:01:33.147316 | orchestrator | | host_status | None | 2025-04-05 13:01:33.147332 | orchestrator | | id | 342042fe-ea3c-4465-bb95-d21f91d37bda | 2025-04-05 13:01:33.147354 | orchestrator | | image | Cirros 0.6.2 (b962deaa-948a-4930-8038-82279d78067e) | 2025-04-05 13:01:33.147371 | orchestrator | | key_name | test | 2025-04-05 13:01:33.147393 | orchestrator | | locked | False | 2025-04-05 13:01:33.147411 | orchestrator | | locked_reason | None | 2025-04-05 13:01:33.147427 | orchestrator | | name | test | 2025-04-05 13:01:33.147448 | orchestrator | | progress | 0 | 2025-04-05 13:01:33.147464 | orchestrator | | project_id | e5bb4cf6179d464b9f1fe45602a3a22f | 2025-04-05 13:01:33.147480 | orchestrator | | properties | hostname='test' | 2025-04-05 13:01:33.147497 | orchestrator | | security_groups | name='icmp' | 2025-04-05 13:01:33.147513 | orchestrator | | | name='ssh' | 2025-04-05 13:01:33.147529 | orchestrator | | server_groups | ['b1642cf9-4e5f-4de1-a2b2-859dde7f8b38'] | 2025-04-05 13:01:33.147551 | orchestrator | | status | ACTIVE | 2025-04-05 13:01:33.147571 | orchestrator | | tags | test | 2025-04-05 13:01:33.147587 | orchestrator | | trusted_image_certificates | None | 2025-04-05 13:01:33.147601 | orchestrator | | updated | 2025-04-05T13:00:16Z | 2025-04-05 13:01:33.147615 | orchestrator | | user_id | c84fa5d5301b4b688c4a8fb04f1ce27b | 2025-04-05 13:01:33.147634 | orchestrator | | volumes_attached | delete_on_termination='False', id='897ba766-e4f4-498a-815a-9cc89cb5c772' | 2025-04-05 13:01:33.156734 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:33.406777 | orchestrator | + openstack --os-cloud test server show test-1 2025-04-05 13:01:36.590108 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:36.590222 | orchestrator | | Field | Value | 2025-04-05 13:01:36.590763 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:36.590806 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-04-05 13:01:36.590820 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-04-05 13:01:36.590846 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-04-05 13:01:36.590859 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-04-05 13:01:36.590872 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-04-05 13:01:36.590884 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-04-05 13:01:36.590897 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-04-05 13:01:36.590932 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-04-05 13:01:36.590959 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-04-05 13:01:36.590973 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-04-05 13:01:36.590985 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-04-05 13:01:36.591005 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-04-05 13:01:36.591024 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-04-05 13:01:36.591037 | orchestrator | | OS-EXT-STS:task_state | None | 2025-04-05 13:01:36.591050 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-04-05 13:01:36.591062 | orchestrator | | OS-SRV-USG:launched_at | 2025-04-05T12:58:33.000000 | 2025-04-05 13:01:36.591075 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-04-05 13:01:36.591087 | orchestrator | | accessIPv4 | | 2025-04-05 13:01:36.591100 | orchestrator | | accessIPv6 | | 2025-04-05 13:01:36.591113 | orchestrator | | addresses | auto_allocated_network=10.42.0.6, 192.168.112.143 | 2025-04-05 13:01:36.591131 | orchestrator | | config_drive | | 2025-04-05 13:01:36.591145 | orchestrator | | created | 2025-04-05T12:58:21Z | 2025-04-05 13:01:36.591163 | orchestrator | | description | None | 2025-04-05 13:01:36.591181 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-04-05 13:01:36.591194 | orchestrator | | hostId | fdbed5c4df402c414789b7ade234c320d7e1dfb386ccb916754b858b | 2025-04-05 13:01:36.591207 | orchestrator | | host_status | None | 2025-04-05 13:01:36.591219 | orchestrator | | id | 33fc9d7f-d353-49d1-8762-f612f329d00b | 2025-04-05 13:01:36.591232 | orchestrator | | image | Cirros 0.6.2 (b962deaa-948a-4930-8038-82279d78067e) | 2025-04-05 13:01:36.591244 | orchestrator | | key_name | test | 2025-04-05 13:01:36.591257 | orchestrator | | locked | False | 2025-04-05 13:01:36.591269 | orchestrator | | locked_reason | None | 2025-04-05 13:01:36.591282 | orchestrator | | name | test-1 | 2025-04-05 13:01:36.591311 | orchestrator | | progress | 0 | 2025-04-05 13:01:36.591324 | orchestrator | | project_id | e5bb4cf6179d464b9f1fe45602a3a22f | 2025-04-05 13:01:36.591337 | orchestrator | | properties | hostname='test-1' | 2025-04-05 13:01:36.591350 | orchestrator | | security_groups | name='icmp' | 2025-04-05 13:01:36.591362 | 
orchestrator | | | name='ssh' | 2025-04-05 13:01:36.591375 | orchestrator | | server_groups | ['b1642cf9-4e5f-4de1-a2b2-859dde7f8b38'] | 2025-04-05 13:01:36.591387 | orchestrator | | status | ACTIVE | 2025-04-05 13:01:36.591400 | orchestrator | | tags | test | 2025-04-05 13:01:36.591412 | orchestrator | | trusted_image_certificates | None | 2025-04-05 13:01:36.591427 | orchestrator | | updated | 2025-04-05T13:00:21Z | 2025-04-05 13:01:36.591439 | orchestrator | | user_id | c84fa5d5301b4b688c4a8fb04f1ce27b | 2025-04-05 13:01:36.591468 | orchestrator | | volumes_attached | | 2025-04-05 13:01:36.595004 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:36.823164 | orchestrator | + openstack --os-cloud test server show test-2 2025-04-05 13:01:40.334863 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:40.334987 | orchestrator | | Field | Value | 2025-04-05 13:01:40.335007 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:40.335021 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-04-05 13:01:40.335035 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-04-05 13:01:40.335050 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-04-05 13:01:40.335087 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-04-05 13:01:40.335103 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-04-05 13:01:40.335117 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-04-05 13:01:40.335168 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-04-05 13:01:40.335183 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-04-05 13:01:40.335210 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-04-05 13:01:40.335225 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-04-05 13:01:40.335239 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-04-05 13:01:40.335254 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-04-05 13:01:40.335268 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-04-05 13:01:40.335283 | orchestrator | | OS-EXT-STS:task_state | None | 2025-04-05 13:01:40.335297 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-04-05 13:01:40.335311 | orchestrator | | OS-SRV-USG:launched_at | 2025-04-05T12:59:06.000000 | 2025-04-05 13:01:40.335326 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-04-05 13:01:40.335354 | orchestrator | | accessIPv4 | | 2025-04-05 13:01:40.335369 | orchestrator | | accessIPv6 | | 2025-04-05 13:01:40.335387 | orchestrator | | addresses | auto_allocated_network=10.42.0.29, 
192.168.112.192 | 2025-04-05 13:01:40.335409 | orchestrator | | config_drive | | 2025-04-05 13:01:40.335425 | orchestrator | | created | 2025-04-05T12:58:54Z | 2025-04-05 13:01:40.335442 | orchestrator | | description | None | 2025-04-05 13:01:40.335458 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-04-05 13:01:40.335475 | orchestrator | | hostId | 796d33097c2276375f0354e277e8b0e4b3c1a4eff9ff6397c0259c3b | 2025-04-05 13:01:40.335491 | orchestrator | | host_status | None | 2025-04-05 13:01:40.335507 | orchestrator | | id | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | 2025-04-05 13:01:40.335534 | orchestrator | | image | Cirros 0.6.2 (b962deaa-948a-4930-8038-82279d78067e) | 2025-04-05 13:01:40.335550 | orchestrator | | key_name | test | 2025-04-05 13:01:40.335566 | orchestrator | | locked | False | 2025-04-05 13:01:40.335582 | orchestrator | | locked_reason | None | 2025-04-05 13:01:40.335598 | orchestrator | | name | test-2 | 2025-04-05 13:01:40.335621 | orchestrator | | progress | 0 | 2025-04-05 13:01:40.335638 | orchestrator | | project_id | e5bb4cf6179d464b9f1fe45602a3a22f | 2025-04-05 13:01:40.335654 | orchestrator | | properties | hostname='test-2' | 2025-04-05 13:01:40.335669 | orchestrator | | security_groups | name='icmp' | 2025-04-05 13:01:40.335685 | orchestrator | | | name='ssh' | 2025-04-05 13:01:40.335701 | orchestrator | | server_groups | ['b1642cf9-4e5f-4de1-a2b2-859dde7f8b38'] | 2025-04-05 13:01:40.335729 | orchestrator | | status | ACTIVE | 2025-04-05 13:01:40.335745 | orchestrator | | tags | test | 2025-04-05 13:01:40.335760 | orchestrator | | trusted_image_certificates | None | 2025-04-05 13:01:40.335775 | orchestrator | | updated | 2025-04-05T13:00:25Z | 2025-04-05 13:01:40.335789 | orchestrator | | user_id | c84fa5d5301b4b688c4a8fb04f1ce27b | 2025-04-05 13:01:40.335808 | orchestrator | | volumes_attached | | 2025-04-05 13:01:40.341712 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:40.641585 | orchestrator | + openstack --os-cloud test server show test-3 2025-04-05 13:01:44.007556 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:44.007670 | orchestrator | | Field | Value | 2025-04-05 13:01:44.007690 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:44.007720 | orchestrator | | OS-DCF:diskConfig | 
MANUAL | 2025-04-05 13:01:44.007756 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-04-05 13:01:44.007771 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-04-05 13:01:44.007786 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-04-05 13:01:44.007800 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-04-05 13:01:44.007814 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-04-05 13:01:44.007828 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-04-05 13:01:44.007842 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-04-05 13:01:44.007868 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-04-05 13:01:44.007883 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-04-05 13:01:44.007897 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-04-05 13:01:44.007976 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-04-05 13:01:44.007993 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-04-05 13:01:44.008007 | orchestrator | | OS-EXT-STS:task_state | None | 2025-04-05 13:01:44.008021 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-04-05 13:01:44.008035 | orchestrator | | OS-SRV-USG:launched_at | 2025-04-05T12:59:29.000000 | 2025-04-05 13:01:44.008049 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-04-05 13:01:44.008063 | orchestrator | | accessIPv4 | | 2025-04-05 13:01:44.008077 | orchestrator | | accessIPv6 | | 2025-04-05 13:01:44.008093 | orchestrator | | addresses | auto_allocated_network=10.42.0.36, 192.168.112.106 | 2025-04-05 13:01:44.008116 | orchestrator | | config_drive | | 2025-04-05 13:01:44.008138 | orchestrator | | created | 2025-04-05T12:59:22Z | 2025-04-05 13:01:44.008163 | orchestrator | | description | None | 2025-04-05 13:01:44.008181 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-04-05 13:01:44.008197 | orchestrator | | hostId | b8a36f8772853a7276a60d8b6842574aca8bcf940b778dd1cf2c67b9 | 2025-04-05 13:01:44.008214 | orchestrator | | host_status | None | 2025-04-05 13:01:44.008230 | orchestrator | | id | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | 2025-04-05 13:01:44.008247 | orchestrator | | image | Cirros 0.6.2 (b962deaa-948a-4930-8038-82279d78067e) | 2025-04-05 13:01:44.008263 | orchestrator | | key_name | test | 2025-04-05 13:01:44.008280 | orchestrator | | locked | False | 2025-04-05 13:01:44.008297 | orchestrator | | locked_reason | None | 2025-04-05 13:01:44.008314 | orchestrator | | name | test-3 | 2025-04-05 13:01:44.008341 | orchestrator | | progress | 0 | 2025-04-05 13:01:44.008366 | orchestrator | | project_id | e5bb4cf6179d464b9f1fe45602a3a22f | 2025-04-05 13:01:44.008383 | orchestrator | | properties | hostname='test-3' | 2025-04-05 13:01:44.008399 | orchestrator | | security_groups | name='icmp' | 2025-04-05 13:01:44.008416 | orchestrator | | | name='ssh' | 2025-04-05 13:01:44.008432 | orchestrator | | server_groups | ['b1642cf9-4e5f-4de1-a2b2-859dde7f8b38'] | 2025-04-05 13:01:44.008448 | orchestrator | | status | ACTIVE | 2025-04-05 13:01:44.008462 | orchestrator | | tags | test | 2025-04-05 13:01:44.008476 | orchestrator | | trusted_image_certificates | None | 2025-04-05 13:01:44.008490 | orchestrator | | updated | 2025-04-05T13:00:29Z | 
2025-04-05 13:01:44.008509 | orchestrator | | user_id | c84fa5d5301b4b688c4a8fb04f1ce27b | 2025-04-05 13:01:44.008529 | orchestrator | | volumes_attached | | 2025-04-05 13:01:44.013803 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:44.254635 | orchestrator | + openstack --os-cloud test server show test-4 2025-04-05 13:01:47.724727 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:47.724829 | orchestrator | | Field | Value | 2025-04-05 13:01:47.724847 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:47.724861 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-04-05 13:01:47.724876 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-04-05 13:01:47.724891 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-04-05 13:01:47.724905 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-04-05 13:01:47.724949 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-04-05 13:01:47.724981 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-04-05 13:01:47.725018 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-04-05 13:01:47.725033 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-04-05 13:01:47.725059 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-04-05 13:01:47.725074 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-04-05 13:01:47.725089 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-04-05 13:01:47.725103 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-04-05 13:01:47.725117 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-04-05 13:01:47.725131 | orchestrator | | OS-EXT-STS:task_state | None | 2025-04-05 13:01:47.725145 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-04-05 13:01:47.725164 | orchestrator | | OS-SRV-USG:launched_at | 2025-04-05T12:59:57.000000 | 2025-04-05 13:01:47.725178 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-04-05 13:01:47.725200 | orchestrator | | accessIPv4 | | 2025-04-05 13:01:47.725214 | orchestrator | | accessIPv6 | | 2025-04-05 13:01:47.725228 | orchestrator | | addresses | auto_allocated_network=10.42.0.19, 192.168.112.154 | 2025-04-05 13:01:47.725249 | orchestrator | | config_drive | | 2025-04-05 13:01:47.725265 | orchestrator | | created | 2025-04-05T12:59:50Z | 2025-04-05 13:01:47.725282 | orchestrator | | description | None | 2025-04-05 13:01:47.725299 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', 
extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-04-05 13:01:47.725315 | orchestrator | | hostId | 796d33097c2276375f0354e277e8b0e4b3c1a4eff9ff6397c0259c3b | 2025-04-05 13:01:47.725331 | orchestrator | | host_status | None | 2025-04-05 13:01:47.725359 | orchestrator | | id | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | 2025-04-05 13:01:47.725375 | orchestrator | | image | Cirros 0.6.2 (b962deaa-948a-4930-8038-82279d78067e) | 2025-04-05 13:01:47.725399 | orchestrator | | key_name | test | 2025-04-05 13:01:47.725414 | orchestrator | | locked | False | 2025-04-05 13:01:47.725431 | orchestrator | | locked_reason | None | 2025-04-05 13:01:47.725448 | orchestrator | | name | test-4 | 2025-04-05 13:01:47.725469 | orchestrator | | progress | 0 | 2025-04-05 13:01:47.725486 | orchestrator | | project_id | e5bb4cf6179d464b9f1fe45602a3a22f | 2025-04-05 13:01:47.725502 | orchestrator | | properties | hostname='test-4' | 2025-04-05 13:01:47.725518 | orchestrator | | security_groups | name='icmp' | 2025-04-05 13:01:47.725539 | orchestrator | | | name='ssh' | 2025-04-05 13:01:47.725555 | orchestrator | | server_groups | ['b1642cf9-4e5f-4de1-a2b2-859dde7f8b38'] | 2025-04-05 13:01:47.725571 | orchestrator | | status | ACTIVE | 2025-04-05 13:01:47.725595 | orchestrator | | tags | test | 2025-04-05 13:01:47.725611 | orchestrator | | trusted_image_certificates | None | 2025-04-05 13:01:47.725626 | orchestrator | | updated | 2025-04-05T13:00:34Z | 2025-04-05 13:01:47.725640 | orchestrator | | user_id | c84fa5d5301b4b688c4a8fb04f1ce27b | 2025-04-05 13:01:47.725660 | orchestrator | | volumes_attached | | 2025-04-05 13:01:47.732410 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-04-05 13:01:48.011035 | orchestrator | + server_ping 2025-04-05 13:01:48.012088 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-04-05 13:01:48.012305 | orchestrator | ++ tr -d '\r' 2025-04-05 13:01:50.713445 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:01:50.725043 | orchestrator | + ping -c3 192.168.112.192 2025-04-05 13:01:50.725066 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 
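Note: server_ping, invoked here after the server show output, checks data-plane connectivity. The xtrace lines that follow show its effect: every ACTIVE floating IP in the test cloud is pinged three times. The function definition itself is not part of this log; a minimal reconstruction from the trace:

    server_ping() {
        # list all ACTIVE floating IPs, strip stray carriage returns, ping each three times
        for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
            ping -c3 "$address"
        done
    }

The same helper is re-run after every live-migration step further down, so connectivity lost during a migration would show up as packet loss here.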
2025-04-05 13:01:51.724163 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=4.22 ms 2025-04-05 13:01:51.724307 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.56 ms 2025-04-05 13:01:52.725671 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.21 ms 2025-04-05 13:01:52.727373 | orchestrator | 2025-04-05 13:01:52.727458 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-04-05 13:01:52.727478 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:01:52.727493 | orchestrator | rtt min/avg/max/mdev = 1.205/2.328/4.220/1.345 ms 2025-04-05 13:01:52.727521 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:01:52.739540 | orchestrator | + ping -c3 192.168.112.143 2025-04-05 13:01:52.739575 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2025-04-05 13:01:53.735796 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=7.82 ms 2025-04-05 13:01:53.735987 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.69 ms 2025-04-05 13:01:54.737652 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.28 ms 2025-04-05 13:01:54.738283 | orchestrator | 2025-04-05 13:01:54.738325 | orchestrator | --- 192.168.112.143 ping statistics --- 2025-04-05 13:01:54.738344 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:01:54.738361 | orchestrator | rtt min/avg/max/mdev = 1.275/3.593/7.816/2.990 ms 2025-04-05 13:01:54.738384 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:01:54.746715 | orchestrator | + ping -c3 192.168.112.106 2025-04-05 13:01:54.746751 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 2025-04-05 13:01:55.745781 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=4.28 ms 2025-04-05 13:01:55.745890 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.40 ms 2025-04-05 13:01:56.747684 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.39 ms 2025-04-05 13:01:56.748386 | orchestrator | 2025-04-05 13:01:56.748420 | orchestrator | --- 192.168.112.106 ping statistics --- 2025-04-05 13:01:56.748438 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:01:56.748455 | orchestrator | rtt min/avg/max/mdev = 1.388/2.355/4.279/1.360 ms 2025-04-05 13:01:56.748478 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:01:56.756291 | orchestrator | + ping -c3 192.168.112.134 2025-04-05 13:01:56.756329 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
2025-04-05 13:01:57.756573 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=3.62 ms 2025-04-05 13:01:57.756696 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.58 ms 2025-04-05 13:01:58.758659 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.22 ms 2025-04-05 13:01:58.766006 | orchestrator | 2025-04-05 13:01:58.766084 | orchestrator | --- 192.168.112.134 ping statistics --- 2025-04-05 13:01:58.766102 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:01:58.766116 | orchestrator | rtt min/avg/max/mdev = 1.216/2.139/3.621/1.058 ms 2025-04-05 13:01:58.766131 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:01:58.766146 | orchestrator | + ping -c3 192.168.112.154 2025-04-05 13:01:58.766168 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 2025-04-05 13:01:59.765091 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=4.47 ms 2025-04-05 13:01:59.765182 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=1.63 ms 2025-04-05 13:02:00.766693 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.15 ms 2025-04-05 13:02:00.767789 | orchestrator | 2025-04-05 13:02:00.767827 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-04-05 13:02:00.767846 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:02:00.767862 | orchestrator | rtt min/avg/max/mdev = 1.153/2.417/4.466/1.461 ms 2025-04-05 13:02:00.767886 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-05 13:02:03.797214 | orchestrator | + compute_list 2025-04-05 13:02:03.797324 | orchestrator | + osism manage compute list testbed-node-3 2025-04-05 13:02:03.797361 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:04.028546 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:04.028626 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:02:04.028642 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE | 2025-04-05 13:02:04.028657 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:04.028685 | orchestrator | + osism manage compute list testbed-node-4 2025-04-05 13:02:06.924260 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:07.138689 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:07.138798 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:02:07.138815 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE | 2025-04-05 13:02:07.138830 | orchestrator | | 342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE | 2025-04-05 13:02:07.138844 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:07.138895 | orchestrator | + osism manage compute list testbed-node-5 2025-04-05 13:02:09.957803 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:10.232634 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:10.232708 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:02:10.232726 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE | 2025-04-05 13:02:10.232742 | orchestrator | | 
b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE | 2025-04-05 13:02:10.232757 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:10.232785 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-04-05 13:02:12.930239 | orchestrator | 2025-04-05 13:02:12 | INFO  | Live migrating server 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce 2025-04-05 13:02:19.873929 | orchestrator | 2025-04-05 13:02:19 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress 2025-04-05 13:02:22.249508 | orchestrator | 2025-04-05 13:02:22 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress 2025-04-05 13:02:24.598304 | orchestrator | 2025-04-05 13:02:24 | INFO  | Live migrating server 342042fe-ea3c-4465-bb95-d21f91d37bda 2025-04-05 13:02:30.282881 | orchestrator | 2025-04-05 13:02:30 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:02:32.536877 | orchestrator | 2025-04-05 13:02:32 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:02:34.913948 | orchestrator | 2025-04-05 13:02:34 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:02:37.404640 | orchestrator | + compute_list 2025-04-05 13:02:40.006929 | orchestrator | + osism manage compute list testbed-node-3 2025-04-05 13:02:40.007082 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:40.231178 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:40.231258 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:02:40.231274 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE | 2025-04-05 13:02:40.231288 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE | 2025-04-05 13:02:40.231303 | orchestrator | | 342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE | 2025-04-05 13:02:40.231317 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:40.231346 | orchestrator | + osism manage compute list testbed-node-4 2025-04-05 13:02:42.609046 | orchestrator | +------+--------+----------+ 2025-04-05 13:02:42.834271 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:42.834371 | orchestrator | |------+--------+----------| 2025-04-05 13:02:42.834389 | orchestrator | +------+--------+----------+ 2025-04-05 13:02:42.834420 | orchestrator | + osism manage compute list testbed-node-5 2025-04-05 13:02:45.535964 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:45.814177 | orchestrator | | ID | Name | Status | 2025-04-05 13:02:45.814255 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:02:45.814271 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE | 2025-04-05 13:02:45.814286 | orchestrator | | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE | 2025-04-05 13:02:45.814301 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:02:45.814329 | orchestrator | + server_ping 2025-04-05 13:02:45.815409 | orchestrator | ++ tr -d '\r' 2025-04-05 13:02:48.556098 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-04-05 13:02:48.556244 | orchestrator | + for address in $(openstack 
--os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:02:48.565383 | orchestrator | + ping -c3 192.168.112.192 2025-04-05 13:02:48.565427 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-04-05 13:02:49.562451 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.44 ms 2025-04-05 13:02:49.562618 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.52 ms 2025-04-05 13:02:50.564587 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.35 ms 2025-04-05 13:02:50.565247 | orchestrator | 2025-04-05 13:02:50.565283 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-04-05 13:02:50.565301 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:02:50.565318 | orchestrator | rtt min/avg/max/mdev = 1.352/3.102/6.440/2.360 ms 2025-04-05 13:02:50.565342 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:02:50.575088 | orchestrator | + ping -c3 192.168.112.143 2025-04-05 13:02:50.575217 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2025-04-05 13:02:51.572805 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=5.60 ms 2025-04-05 13:02:51.572979 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.52 ms 2025-04-05 13:02:52.575282 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.51 ms 2025-04-05 13:02:52.586347 | orchestrator | 2025-04-05 13:02:52.586382 | orchestrator | --- 192.168.112.143 ping statistics --- 2025-04-05 13:02:52.586399 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:02:52.586413 | orchestrator | rtt min/avg/max/mdev = 1.513/2.877/5.601/1.926 ms 2025-04-05 13:02:52.586427 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:02:52.586442 | orchestrator | + ping -c3 192.168.112.106 2025-04-05 13:02:52.586464 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 2025-04-05 13:02:53.583549 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=6.66 ms 2025-04-05 13:02:53.583654 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.71 ms 2025-04-05 13:02:54.585563 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.43 ms 2025-04-05 13:02:54.586129 | orchestrator | 2025-04-05 13:02:54.586163 | orchestrator | --- 192.168.112.106 ping statistics --- 2025-04-05 13:02:54.586178 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:02:54.586193 | orchestrator | rtt min/avg/max/mdev = 1.433/3.266/6.659/2.401 ms 2025-04-05 13:02:54.586235 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:02:54.594285 | orchestrator | + ping -c3 192.168.112.134 2025-04-05 13:02:54.594318 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
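Note: the drain-and-verify pattern in this part of the trace is: list current placement per compute node, live-migrate everything off one node onto a target with osism manage compute migrate, then list placement again and re-run server_ping. A sketch of one cycle as recorded above; the loop body of compute_list is an assumption inferred from the three nodes it always queries:

    compute_list() {
        # the trace queries exactly these three compute nodes, in this order
        for node in testbed-node-3 testbed-node-4 testbed-node-5; do
            osism manage compute list "$node"
        done
    }

    # drain testbed-node-4 onto testbed-node-3, then re-check placement and reachability
    osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
    compute_list
    server_ping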
2025-04-05 13:02:55.594340 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=3.70 ms 2025-04-05 13:02:55.594538 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.52 ms 2025-04-05 13:02:56.595566 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.19 ms 2025-04-05 13:02:56.596111 | orchestrator | 2025-04-05 13:02:56.596145 | orchestrator | --- 192.168.112.134 ping statistics --- 2025-04-05 13:02:56.596161 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:02:56.596175 | orchestrator | rtt min/avg/max/mdev = 1.194/2.139/3.702/1.113 ms 2025-04-05 13:02:56.596196 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:02:56.606161 | orchestrator | + ping -c3 192.168.112.154 2025-04-05 13:02:56.606196 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 2025-04-05 13:02:57.602441 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=6.96 ms 2025-04-05 13:02:57.602545 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=1.82 ms 2025-04-05 13:02:58.604586 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.36 ms 2025-04-05 13:03:01.237343 | orchestrator | 2025-04-05 13:03:01.237481 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-04-05 13:03:01.238106 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:03:01.238134 | orchestrator | rtt min/avg/max/mdev = 1.358/3.377/6.955/2.536 ms 2025-04-05 13:03:01.238151 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-04-05 13:03:01.238187 | orchestrator | 2025-04-05 13:03:01 | INFO  | Live migrating server 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 2025-04-05 13:03:08.772837 | orchestrator | 2025-04-05 13:03:08 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress 2025-04-05 13:03:11.068185 | orchestrator | 2025-04-05 13:03:11 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress 2025-04-05 13:03:13.405586 | orchestrator | 2025-04-05 13:03:13 | INFO  | Live migrating server b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 2025-04-05 13:03:19.697695 | orchestrator | 2025-04-05 13:03:19 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress 2025-04-05 13:03:21.932582 | orchestrator | 2025-04-05 13:03:21 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress 2025-04-05 13:03:24.554414 | orchestrator | + compute_list 2025-04-05 13:03:27.156770 | orchestrator | + osism manage compute list testbed-node-3 2025-04-05 13:03:27.156923 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:03:27.370102 | orchestrator | | ID | Name | Status | 2025-04-05 13:03:27.370149 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:03:27.370165 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE | 2025-04-05 13:03:27.370181 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE | 2025-04-05 13:03:27.370196 | orchestrator | | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE | 2025-04-05 13:03:27.370211 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE | 2025-04-05 13:03:27.370227 | orchestrator | | 
342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE | 2025-04-05 13:03:27.370243 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:03:27.370267 | orchestrator | + osism manage compute list testbed-node-4 2025-04-05 13:03:29.750560 | orchestrator | +------+--------+----------+ 2025-04-05 13:03:29.971319 | orchestrator | | ID | Name | Status | 2025-04-05 13:03:29.971389 | orchestrator | |------+--------+----------| 2025-04-05 13:03:29.971404 | orchestrator | +------+--------+----------+ 2025-04-05 13:03:29.971431 | orchestrator | + osism manage compute list testbed-node-5 2025-04-05 13:03:32.303838 | orchestrator | +------+--------+----------+ 2025-04-05 13:03:32.545770 | orchestrator | | ID | Name | Status | 2025-04-05 13:03:32.545859 | orchestrator | |------+--------+----------| 2025-04-05 13:03:32.545919 | orchestrator | +------+--------+----------+ 2025-04-05 13:03:32.545949 | orchestrator | + server_ping 2025-04-05 13:03:32.549516 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-04-05 13:03:35.106176 | orchestrator | ++ tr -d '\r' 2025-04-05 13:03:35.106306 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:03:35.113356 | orchestrator | + ping -c3 192.168.112.192 2025-04-05 13:03:35.113502 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-04-05 13:03:36.113282 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=3.11 ms 2025-04-05 13:03:36.113412 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.66 ms 2025-04-05 13:03:37.115283 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.44 ms 2025-04-05 13:03:37.115982 | orchestrator | 2025-04-05 13:03:37.116017 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-04-05 13:03:37.116035 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:03:37.116050 | orchestrator | rtt min/avg/max/mdev = 1.438/2.070/3.111/0.741 ms 2025-04-05 13:03:37.116072 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:03:37.128224 | orchestrator | + ping -c3 192.168.112.143 2025-04-05 13:03:37.128271 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 2025-04-05 13:03:38.123497 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=8.64 ms 2025-04-05 13:03:38.123641 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.59 ms 2025-04-05 13:03:39.125179 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.36 ms 2025-04-05 13:03:39.126261 | orchestrator | 2025-04-05 13:03:39.126295 | orchestrator | --- 192.168.112.143 ping statistics --- 2025-04-05 13:03:39.126339 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:03:39.126355 | orchestrator | rtt min/avg/max/mdev = 1.355/3.861/8.640/3.380 ms 2025-04-05 13:03:39.126378 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:03:39.136177 | orchestrator | + ping -c3 192.168.112.106 2025-04-05 13:03:39.136210 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 
2025-04-05 13:03:40.134590 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=5.50 ms 2025-04-05 13:03:40.134717 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.83 ms 2025-04-05 13:03:41.135337 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.26 ms 2025-04-05 13:03:41.135827 | orchestrator | 2025-04-05 13:03:41.135859 | orchestrator | --- 192.168.112.106 ping statistics --- 2025-04-05 13:03:41.135906 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:03:41.135922 | orchestrator | rtt min/avg/max/mdev = 1.263/2.862/5.500/1.878 ms 2025-04-05 13:03:41.135943 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:03:41.142799 | orchestrator | + ping -c3 192.168.112.134 2025-04-05 13:03:41.142832 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 2025-04-05 13:03:42.142832 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=3.02 ms 2025-04-05 13:03:42.142962 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=1.62 ms 2025-04-05 13:03:43.144367 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.02 ms 2025-04-05 13:03:43.145107 | orchestrator | 2025-04-05 13:03:43.145147 | orchestrator | --- 192.168.112.134 ping statistics --- 2025-04-05 13:03:43.145167 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:03:43.145200 | orchestrator | rtt min/avg/max/mdev = 1.015/1.886/3.022/0.840 ms 2025-04-05 13:03:43.145226 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:03:43.152231 | orchestrator | + ping -c3 192.168.112.154 2025-04-05 13:03:43.152264 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
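Note: in the earlier server show output, OS-EXT-SRV-ATTR:host and host_status are None, presumably because the test user is not an admin and Nova hides these fields from non-admin users by default policy; placement is therefore verified through osism manage compute list instead. With admin credentials the hypervisor could also be read directly, along the lines of the sketch below (the cloud name admin is an assumption):

    # show only the compute host currently running the server "test"
    openstack --os-cloud admin server show test -c "OS-EXT-SRV-ATTR:host" -f value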
2025-04-05 13:03:44.152252 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=3.66 ms 2025-04-05 13:03:44.152371 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=1.44 ms 2025-04-05 13:03:45.154249 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.29 ms 2025-04-05 13:03:45.154897 | orchestrator | 2025-04-05 13:03:45.154984 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-04-05 13:03:45.155004 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-04-05 13:03:45.155022 | orchestrator | rtt min/avg/max/mdev = 1.285/2.126/3.658/1.084 ms 2025-04-05 13:03:45.155050 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-04-05 13:03:48.173959 | orchestrator | 2025-04-05 13:03:48 | INFO  | Live migrating server 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 2025-04-05 13:03:53.442553 | orchestrator | 2025-04-05 13:03:53 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress 2025-04-05 13:03:55.770978 | orchestrator | 2025-04-05 13:03:55 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress 2025-04-05 13:03:58.107252 | orchestrator | 2025-04-05 13:03:58 | INFO  | Live migrating server 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce 2025-04-05 13:04:02.986702 | orchestrator | 2025-04-05 13:04:02 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress 2025-04-05 13:04:05.349677 | orchestrator | 2025-04-05 13:04:05 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress 2025-04-05 13:04:07.571705 | orchestrator | 2025-04-05 13:04:07 | INFO  | Live migrating server b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 2025-04-05 13:04:11.405047 | orchestrator | 2025-04-05 13:04:11 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress 2025-04-05 13:04:13.930941 | orchestrator | 2025-04-05 13:04:13 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress 2025-04-05 13:04:16.169593 | orchestrator | 2025-04-05 13:04:16 | INFO  | Live migrating server 33fc9d7f-d353-49d1-8762-f612f329d00b 2025-04-05 13:04:20.511694 | orchestrator | 2025-04-05 13:04:20 | INFO  | Live migration of 33fc9d7f-d353-49d1-8762-f612f329d00b (test-1) is still in progress 2025-04-05 13:04:22.707736 | orchestrator | 2025-04-05 13:04:22 | INFO  | Live migration of 33fc9d7f-d353-49d1-8762-f612f329d00b (test-1) is still in progress 2025-04-05 13:04:25.055081 | orchestrator | 2025-04-05 13:04:25 | INFO  | Live migrating server 342042fe-ea3c-4465-bb95-d21f91d37bda 2025-04-05 13:04:30.319926 | orchestrator | 2025-04-05 13:04:30 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:04:32.569671 | orchestrator | 2025-04-05 13:04:32 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:04:34.891127 | orchestrator | 2025-04-05 13:04:34 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress 2025-04-05 13:04:37.454258 | orchestrator | + compute_list 2025-04-05 13:04:39.759337 | orchestrator | + osism manage compute list testbed-node-3 2025-04-05 13:04:39.759407 | orchestrator | +------+--------+----------+ 2025-04-05 13:04:39.973920 | orchestrator | | ID | Name | Status | 2025-04-05 13:04:39.973974 | orchestrator | |------+--------+----------| 2025-04-05 
13:04:39.973990 | orchestrator | +------+--------+----------+ 2025-04-05 13:04:39.974066 | orchestrator | + osism manage compute list testbed-node-4 2025-04-05 13:04:42.838633 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:04:43.095303 | orchestrator | | ID | Name | Status | 2025-04-05 13:04:43.095394 | orchestrator | |--------------------------------------+--------+----------| 2025-04-05 13:04:43.095411 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE | 2025-04-05 13:04:43.095424 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE | 2025-04-05 13:04:43.095437 | orchestrator | | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE | 2025-04-05 13:04:43.095449 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE | 2025-04-05 13:04:43.095462 | orchestrator | | 342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE | 2025-04-05 13:04:43.095475 | orchestrator | +--------------------------------------+--------+----------+ 2025-04-05 13:04:43.095501 | orchestrator | + osism manage compute list testbed-node-5 2025-04-05 13:04:45.466539 | orchestrator | +------+--------+----------+ 2025-04-05 13:04:45.678655 | orchestrator | | ID | Name | Status | 2025-04-05 13:04:45.678729 | orchestrator | |------+--------+----------| 2025-04-05 13:04:45.678745 | orchestrator | +------+--------+----------+ 2025-04-05 13:04:45.678773 | orchestrator | + server_ping 2025-04-05 13:04:45.679770 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-04-05 13:04:45.680063 | orchestrator | ++ tr -d '\r' 2025-04-05 13:04:48.428332 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:04:48.439642 | orchestrator | + ping -c3 192.168.112.192 2025-04-05 13:04:48.439691 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-04-05 13:04:49.436373 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.77 ms 2025-04-05 13:04:49.436487 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.63 ms 2025-04-05 13:04:50.437394 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.11 ms 2025-04-05 13:04:50.445704 | orchestrator | 2025-04-05 13:04:50.445736 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-04-05 13:04:50.445753 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:04:50.445768 | orchestrator | rtt min/avg/max/mdev = 1.105/3.503/7.772/3.026 ms 2025-04-05 13:04:50.445783 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:04:50.445824 | orchestrator | + ping -c3 192.168.112.143 2025-04-05 13:04:50.445892 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 
2025-04-05 13:04:51.444110 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=5.30 ms 2025-04-05 13:04:51.444234 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.84 ms 2025-04-05 13:04:52.446498 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=2.25 ms 2025-04-05 13:04:52.447069 | orchestrator | 2025-04-05 13:04:52.447097 | orchestrator | --- 192.168.112.143 ping statistics --- 2025-04-05 13:04:52.447114 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:04:52.447129 | orchestrator | rtt min/avg/max/mdev = 1.837/3.129/5.301/1.544 ms 2025-04-05 13:04:52.447149 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:04:52.456294 | orchestrator | + ping -c3 192.168.112.106 2025-04-05 13:04:52.456327 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 2025-04-05 13:04:53.453927 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=4.98 ms 2025-04-05 13:04:53.454106 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.39 ms 2025-04-05 13:04:54.456504 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=2.09 ms 2025-04-05 13:04:54.466979 | orchestrator | 2025-04-05 13:04:54.467013 | orchestrator | --- 192.168.112.106 ping statistics --- 2025-04-05 13:04:54.467048 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-04-05 13:04:54.467063 | orchestrator | rtt min/avg/max/mdev = 1.386/2.819/4.979/1.553 ms 2025-04-05 13:04:54.467078 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:04:54.467092 | orchestrator | + ping -c3 192.168.112.134 2025-04-05 13:04:54.467115 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 2025-04-05 13:04:55.465550 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=5.52 ms 2025-04-05 13:04:55.465668 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.08 ms 2025-04-05 13:04:56.467458 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=2.21 ms 2025-04-05 13:04:56.468029 | orchestrator | 2025-04-05 13:04:56.468060 | orchestrator | --- 192.168.112.134 ping statistics --- 2025-04-05 13:04:56.468076 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-04-05 13:04:56.468090 | orchestrator | rtt min/avg/max/mdev = 2.078/3.268/5.515/1.589 ms 2025-04-05 13:04:56.468110 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-04-05 13:04:56.478225 | orchestrator | + ping -c3 192.168.112.154 2025-04-05 13:04:56.478256 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
2025-04-05 13:04:57.475435 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=6.02 ms
2025-04-05 13:04:57.475570 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=1.74 ms
2025-04-05 13:04:58.477292 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.62 ms
2025-04-05 13:04:58.478134 | orchestrator |
2025-04-05 13:04:58.478187 | orchestrator | --- 192.168.112.154 ping statistics ---
2025-04-05 13:04:58.478206 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-04-05 13:04:58.478224 | orchestrator | rtt min/avg/max/mdev = 1.622/3.126/6.019/2.046 ms
2025-04-05 13:04:58.478248 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-04-05 13:05:01.474774 | orchestrator | 2025-04-05 13:05:01 | INFO  | Live migrating server 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83
2025-04-05 13:05:06.960129 | orchestrator | 2025-04-05 13:05:06 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress
2025-04-05 13:05:09.315992 | orchestrator | 2025-04-05 13:05:09 | INFO  | Live migration of 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 (test-4) is still in progress
2025-04-05 13:05:11.642613 | orchestrator | 2025-04-05 13:05:11 | INFO  | Live migrating server 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce
2025-04-05 13:05:18.282765 | orchestrator | 2025-04-05 13:05:18 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress
2025-04-05 13:05:20.555260 | orchestrator | 2025-04-05 13:05:20 | INFO  | Live migration of 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce (test-3) is still in progress
2025-04-05 13:05:23.053973 | orchestrator | 2025-04-05 13:05:23 | INFO  | Live migrating server b9fefd74-1db8-4d09-b18f-ee1cbb4e5473
2025-04-05 13:05:28.294458 | orchestrator | 2025-04-05 13:05:28 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress
2025-04-05 13:05:30.575367 | orchestrator | 2025-04-05 13:05:30 | INFO  | Live migration of b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 (test-2) is still in progress
2025-04-05 13:05:32.918307 | orchestrator | 2025-04-05 13:05:32 | INFO  | Live migrating server 33fc9d7f-d353-49d1-8762-f612f329d00b
2025-04-05 13:05:37.408091 | orchestrator | 2025-04-05 13:05:37 | INFO  | Live migration of 33fc9d7f-d353-49d1-8762-f612f329d00b (test-1) is still in progress
2025-04-05 13:05:39.627317 | orchestrator | 2025-04-05 13:05:39 | INFO  | Live migration of 33fc9d7f-d353-49d1-8762-f612f329d00b (test-1) is still in progress
2025-04-05 13:05:41.865868 | orchestrator | 2025-04-05 13:05:41 | INFO  | Live migrating server 342042fe-ea3c-4465-bb95-d21f91d37bda
2025-04-05 13:05:46.082331 | orchestrator | 2025-04-05 13:05:46 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress
2025-04-05 13:05:48.316719 | orchestrator | 2025-04-05 13:05:48 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress
2025-04-05 13:05:50.588765 | orchestrator | 2025-04-05 13:05:50 | INFO  | Live migration of 342042fe-ea3c-4465-bb95-d21f91d37bda (test) is still in progress
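Editor's note: `osism manage compute migrate --yes --target testbed-node-5 testbed-node-4` drains one hypervisor onto another by live-migrating every instance it hosts, which is what the INFO lines above show. A hedged sketch of roughly the same procedure with the plain OpenStack CLI follows; the cloud name `admin`, the polling loop, and the two-second interval are illustrative assumptions, not the osism implementation:

    #!/usr/bin/env bash
    # Illustrative host evacuation using python-openstackclient (recent releases
    # provide "server migrate --live-migration --host").
    set -euo pipefail

    SOURCE_HOST=testbed-node-4
    TARGET_HOST=testbed-node-5

    # Every server currently running on the source hypervisor (admin view).
    for server in $(openstack --os-cloud admin server list --host "$SOURCE_HOST" --all-projects -f value -c ID); do
        echo "Live migrating server $server"
        openstack --os-cloud admin server migrate --live-migration --host "$TARGET_HOST" "$server"
        # Poll until Nova leaves the MIGRATING state, mirroring the
        # "is still in progress" messages in the log above.
        while [ "$(openstack --os-cloud admin server show "$server" -f value -c status)" = "MIGRATING" ]; do
            echo "Live migration of $server is still in progress"
            sleep 2
        done
    done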
2025-04-05 13:05:53.180199 | orchestrator | + compute_list
2025-04-05 13:05:55.568085 | orchestrator | + osism manage compute list testbed-node-3
2025-04-05 13:05:55.568918 | orchestrator | +------+--------+----------+
2025-04-05 13:05:55.784171 | orchestrator | | ID | Name | Status |
2025-04-05 13:05:55.784270 | orchestrator | |------+--------+----------|
2025-04-05 13:05:55.784290 | orchestrator | +------+--------+----------+
2025-04-05 13:05:55.784322 | orchestrator | + osism manage compute list testbed-node-4
2025-04-05 13:05:58.025687 | orchestrator | +------+--------+----------+
2025-04-05 13:05:58.250222 | orchestrator | | ID | Name | Status |
2025-04-05 13:05:58.250311 | orchestrator | |------+--------+----------|
2025-04-05 13:05:58.250328 | orchestrator | +------+--------+----------+
2025-04-05 13:05:58.250358 | orchestrator | + osism manage compute list testbed-node-5
2025-04-05 13:06:01.035790 | orchestrator | +--------------------------------------+--------+----------+
2025-04-05 13:06:01.251292 | orchestrator | | ID | Name | Status |
2025-04-05 13:06:01.251383 | orchestrator | |--------------------------------------+--------+----------|
2025-04-05 13:06:01.251400 | orchestrator | | 76a0a4bf-1d15-4f3d-9ab9-8d46fc609d83 | test-4 | ACTIVE |
2025-04-05 13:06:01.251415 | orchestrator | | 5e4fa5b7-fccc-4123-b5dc-4037dc8c48ce | test-3 | ACTIVE |
2025-04-05 13:06:01.251429 | orchestrator | | b9fefd74-1db8-4d09-b18f-ee1cbb4e5473 | test-2 | ACTIVE |
2025-04-05 13:06:01.251443 | orchestrator | | 33fc9d7f-d353-49d1-8762-f612f329d00b | test-1 | ACTIVE |
2025-04-05 13:06:01.251457 | orchestrator | | 342042fe-ea3c-4465-bb95-d21f91d37bda | test | ACTIVE |
2025-04-05 13:06:01.251472 | orchestrator | +--------------------------------------+--------+----------+
2025-04-05 13:06:01.251499 | orchestrator | + server_ping
2025-04-05 13:06:01.252124 | orchestrator | ++ tr -d '\r'
2025-04-05 13:06:01.252246 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-04-05 13:06:03.902199 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-04-05 13:06:03.914150 | orchestrator | + ping -c3 192.168.112.192
2025-04-05 13:06:03.914210 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-04-05 13:06:04.908430 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=8.98 ms
2025-04-05 13:06:04.908577 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=1.73 ms
2025-04-05 13:06:05.909651 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.18 ms
2025-04-05 13:06:05.910345 | orchestrator |
2025-04-05 13:06:05.910387 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-04-05 13:06:05.910407 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-04-05 13:06:05.910424 | orchestrator | rtt min/avg/max/mdev = 1.178/3.961/8.978/3.554 ms
2025-04-05 13:06:05.910450 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-04-05 13:06:05.919159 | orchestrator | + ping -c3 192.168.112.143
2025-04-05 13:06:05.919193 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data.
2025-04-05 13:06:06.918383 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=4.27 ms
2025-04-05 13:06:06.918505 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=1.77 ms
2025-04-05 13:06:07.920152 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.31 ms
2025-04-05 13:06:07.920398 | orchestrator |
2025-04-05 13:06:07.920417 | orchestrator | --- 192.168.112.143 ping statistics ---
2025-04-05 13:06:07.920433 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-04-05 13:06:07.920448 | orchestrator | rtt min/avg/max/mdev = 1.307/2.450/4.272/1.302 ms
2025-04-05 13:06:07.920468 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-04-05 13:06:07.932144 | orchestrator | + ping -c3 192.168.112.106
2025-04-05 13:06:07.932214 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2025-04-05 13:06:08.929297 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=5.31 ms
2025-04-05 13:06:08.929425 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.84 ms
2025-04-05 13:06:09.929749 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.25 ms
2025-04-05 13:06:09.931786 | orchestrator |
2025-04-05 13:06:09.931850 | orchestrator | --- 192.168.112.106 ping statistics ---
2025-04-05 13:06:09.931868 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-04-05 13:06:09.931882 | orchestrator | rtt min/avg/max/mdev = 1.252/2.801/5.312/1.791 ms
2025-04-05 13:06:09.931905 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-04-05 13:06:09.940261 | orchestrator | + ping -c3 192.168.112.134
2025-04-05 13:06:09.940300 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2025-04-05 13:06:10.940506 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=4.61 ms
2025-04-05 13:06:10.940634 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.68 ms
2025-04-05 13:06:11.942337 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=2.13 ms
2025-04-05 13:06:11.943113 | orchestrator |
2025-04-05 13:06:11.943150 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-04-05 13:06:11.943168 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-04-05 13:06:11.943183 | orchestrator | rtt min/avg/max/mdev = 2.126/3.140/4.611/1.064 ms
2025-04-05 13:06:11.943207 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-04-05 13:06:11.951309 | orchestrator | + ping -c3 192.168.112.154
2025-04-05 13:06:11.951352 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data.
2025-04-05 13:06:12.950316 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=5.64 ms
2025-04-05 13:06:12.950446 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.37 ms
2025-04-05 13:06:13.951741 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.65 ms
2025-04-05 13:06:13.956145 | orchestrator |
2025-04-05 13:06:13.956206 | orchestrator | --- 192.168.112.154 ping statistics ---
2025-04-05 13:06:13.956223 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-04-05 13:06:13.956236 | orchestrator | rtt min/avg/max/mdev = 1.651/3.221/5.642/1.736 ms
2025-04-05 13:06:14.107718 | orchestrator | changed
2025-04-05 13:06:14.157370 |
2025-04-05 13:06:14.157476 | TASK [Run tempest]
2025-04-05 13:06:14.288542 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:14.302375 |
2025-04-05 13:06:14.302483 | TASK [Check prometheus alert status]
2025-04-05 13:06:14.436467 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:14.469550 |
2025-04-05 13:06:14.469627 | PLAY RECAP
2025-04-05 13:06:14.469679 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-04-05 13:06:14.469704 |
2025-04-05 13:06:14.699867 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-04-05 13:06:14.702695 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-05 13:06:15.326342 |
2025-04-05 13:06:15.326450 | PLAY [Post output play]
2025-04-05 13:06:15.349230 |
2025-04-05 13:06:15.349324 | LOOP [stage-output : Register sources]
2025-04-05 13:06:15.411499 |
2025-04-05 13:06:15.411671 | TASK [stage-output : Check sudo]
2025-04-05 13:06:16.050360 | orchestrator | sudo: a password is required
2025-04-05 13:06:16.456792 | orchestrator | ok: Runtime: 0:00:00.014185
2025-04-05 13:06:16.464992 |
2025-04-05 13:06:16.465081 | LOOP [stage-output : Set source and destination for files and folders]
2025-04-05 13:06:16.506997 |
2025-04-05 13:06:16.507165 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-04-05 13:06:16.589880 | orchestrator | ok
2025-04-05 13:06:16.597076 |
2025-04-05 13:06:16.597163 | LOOP [stage-output : Ensure target folders exist]
2025-04-05 13:06:17.005347 | orchestrator | ok: "docs"
2025-04-05 13:06:17.005721 |
2025-04-05 13:06:17.211894 | orchestrator | ok: "artifacts"
2025-04-05 13:06:17.397893 | orchestrator | ok: "logs"
2025-04-05 13:06:17.415300 |
2025-04-05 13:06:17.415413 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-05 13:06:17.447730 |
2025-04-05 13:06:17.447904 | TASK [stage-output : Make all log files readable]
2025-04-05 13:06:17.693707 | orchestrator | ok
2025-04-05 13:06:17.704698 |
2025-04-05 13:06:17.704830 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-05 13:06:17.750194 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:17.767989 |
2025-04-05 13:06:17.768106 | TASK [stage-output : Discover log files for compression]
2025-04-05 13:06:17.793086 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:17.813806 |
2025-04-05 13:06:17.813922 | LOOP [stage-output : Archive everything from logs]
2025-04-05 13:06:17.895294 |
2025-04-05 13:06:17.895425 | PLAY [Post cleanup play]
2025-04-05 13:06:17.927488 |
2025-04-05 13:06:17.927567 | TASK [Set cloud fact (Zuul deployment)]
2025-04-05 13:06:17.980153 | orchestrator | ok
2025-04-05 13:06:17.988192 |
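Editor's note: the "sudo: a password is required" line in the stage-output play above is expected rather than an error; the role probes for passwordless sudo and simply stages the output as the unprivileged user when the probe fails, which is why the task still reports "ok". A hypothetical shell equivalent of that probe (the role's actual task is Ansible; this is only an illustration):

    #!/usr/bin/env bash
    # Non-interactive sudo probe: -n makes sudo fail instead of prompting.
    if sudo -n true 2>/dev/null; then
        echo "passwordless sudo available"
    else
        echo "no passwordless sudo; staging logs as the unprivileged user"
    fi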
2025-04-05 13:06:17.988278 | TASK [Set cloud fact (local deployment)]
2025-04-05 13:06:18.012429 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:18.024680 |
2025-04-05 13:06:18.024763 | TASK [Clean the cloud environment]
2025-04-05 13:06:18.570589 | orchestrator | 2025-04-05 13:06:18 - clean up servers
2025-04-05 13:06:19.371355 | orchestrator | 2025-04-05 13:06:19 - testbed-manager
2025-04-05 13:06:19.455270 | orchestrator | 2025-04-05 13:06:19 - testbed-node-1
2025-04-05 13:06:19.553034 | orchestrator | 2025-04-05 13:06:19 - testbed-node-2
2025-04-05 13:06:19.650691 | orchestrator | 2025-04-05 13:06:19 - testbed-node-5
2025-04-05 13:06:19.750377 | orchestrator | 2025-04-05 13:06:19 - testbed-node-3
2025-04-05 13:06:19.837178 | orchestrator | 2025-04-05 13:06:19 - testbed-node-4
2025-04-05 13:06:19.949304 | orchestrator | 2025-04-05 13:06:19 - testbed-node-0
2025-04-05 13:06:20.040554 | orchestrator | 2025-04-05 13:06:20 - clean up keypairs
2025-04-05 13:06:20.060191 | orchestrator | 2025-04-05 13:06:20 - testbed
2025-04-05 13:06:20.086494 | orchestrator | 2025-04-05 13:06:20 - wait for servers to be gone
2025-04-05 13:06:35.647243 | orchestrator | 2025-04-05 13:06:35 - clean up ports
2025-04-05 13:06:35.847367 | orchestrator | 2025-04-05 13:06:35 - 05e67dad-c033-4120-aa9d-dba94bca6541
2025-04-05 13:06:36.133006 | orchestrator | 2025-04-05 13:06:36 - 3414a9fc-7f08-4ccc-a1f4-d4e08021d64f
2025-04-05 13:06:36.327585 | orchestrator | 2025-04-05 13:06:36 - 6cfad184-91b6-40ff-98e5-3a363b1c63df
2025-04-05 13:06:36.555206 | orchestrator | 2025-04-05 13:06:36 - 8f24ffed-c54b-45fe-9a61-bbd36698c300
2025-04-05 13:06:36.740430 | orchestrator | 2025-04-05 13:06:36 - bf28ed61-4ad0-4fe7-a65d-196fce7d107f
2025-04-05 13:06:36.934710 | orchestrator | 2025-04-05 13:06:36 - f031c7f2-0166-4568-87f2-a5e2b1ac1181
2025-04-05 13:06:37.389777 | orchestrator | 2025-04-05 13:06:37 - f2bff80a-77f9-4739-8899-6b009859f217
2025-04-05 13:06:37.581747 | orchestrator | 2025-04-05 13:06:37 - clean up volumes
2025-04-05 13:06:37.722826 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-5-node-base
2025-04-05 13:06:37.771029 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-0-node-base
2025-04-05 13:06:37.808816 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-4-node-base
2025-04-05 13:06:37.851660 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-3-node-base
2025-04-05 13:06:37.898511 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-2-node-base
2025-04-05 13:06:37.938057 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-1-node-base
2025-04-05 13:06:37.979330 | orchestrator | 2025-04-05 13:06:37 - testbed-volume-manager-base
2025-04-05 13:06:38.019546 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-13-node-1
2025-04-05 13:06:38.059981 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-17-node-5
2025-04-05 13:06:38.104947 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-8-node-2
2025-04-05 13:06:38.143503 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-7-node-1
2025-04-05 13:06:38.181008 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-11-node-5
2025-04-05 13:06:38.222784 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-5-node-5
2025-04-05 13:06:38.264298 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-12-node-0
2025-04-05 13:06:38.301646 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-6-node-0
2025-04-05 13:06:38.357263 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-1-node-1
2025-04-05 13:06:38.460118 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-15-node-3
2025-04-05 13:06:38.526155 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-2-node-2
2025-04-05 13:06:38.566353 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-14-node-2
2025-04-05 13:06:38.607753 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-16-node-4
2025-04-05 13:06:38.649506 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-4-node-4
2025-04-05 13:06:38.690986 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-10-node-4
2025-04-05 13:06:38.731041 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-9-node-3
2025-04-05 13:06:38.770432 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-0-node-0
2025-04-05 13:06:38.815261 | orchestrator | 2025-04-05 13:06:38 - testbed-volume-3-node-3
2025-04-05 13:06:38.853281 | orchestrator | 2025-04-05 13:06:38 - disconnect routers
2025-04-05 13:06:38.968074 | orchestrator | 2025-04-05 13:06:38 - testbed
2025-04-05 13:06:40.526464 | orchestrator | 2025-04-05 13:06:40 - clean up subnets
2025-04-05 13:06:40.566833 | orchestrator | 2025-04-05 13:06:40 - subnet-testbed-management
2025-04-05 13:06:40.696267 | orchestrator | 2025-04-05 13:06:40 - clean up networks
2025-04-05 13:06:40.870589 | orchestrator | 2025-04-05 13:06:40 - net-testbed-management
2025-04-05 13:06:41.150626 | orchestrator | 2025-04-05 13:06:41 - clean up security groups
2025-04-05 13:06:41.182677 | orchestrator | 2025-04-05 13:06:41 - testbed-node
2025-04-05 13:06:41.260257 | orchestrator | 2025-04-05 13:06:41 - testbed-management
2025-04-05 13:06:41.355369 | orchestrator | 2025-04-05 13:06:41 - clean up floating ips
2025-04-05 13:06:41.385135 | orchestrator | 2025-04-05 13:06:41 - 81.163.192.14
2025-04-05 13:06:41.757403 | orchestrator | 2025-04-05 13:06:41 - clean up routers
2025-04-05 13:06:41.839869 | orchestrator | 2025-04-05 13:06:41 - testbed
2025-04-05 13:06:42.581424 | orchestrator | changed
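Editor's note: the "Clean the cloud environment" task above tears the testbed down in dependency order: servers and the keypair first, then leftover ports and volumes, then the router attachment, subnet, network, security groups, floating IP, and finally the router. A hedged sketch of the same order with the plain OpenStack CLI; the resource names come from the log, but the commands, loops, and the `CLOUD` placeholder are illustrative assumptions rather than the cleanup script actually used by the job:

    #!/usr/bin/env bash
    # Illustrative teardown in the same order as the cleanup log above.
    set -euo pipefail
    CLOUD=${CLOUD:-test}   # placeholder clouds.yaml entry

    # Servers and keypair first; --wait blocks until each server is gone.
    openstack --os-cloud "$CLOUD" server list --name testbed -f value -c ID |
        xargs -r -n1 openstack --os-cloud "$CLOUD" server delete --wait
    openstack --os-cloud "$CLOUD" keypair delete testbed

    # Leftover ports and volumes.
    openstack --os-cloud "$CLOUD" port list --network net-testbed-management -f value -c ID |
        xargs -r -n1 openstack --os-cloud "$CLOUD" port delete
    openstack --os-cloud "$CLOUD" volume list -f value -c Name | grep '^testbed-volume-' |
        xargs -r -n1 openstack --os-cloud "$CLOUD" volume delete

    # Network plumbing last: detach and delete the router after its dependents.
    openstack --os-cloud "$CLOUD" router remove subnet testbed subnet-testbed-management
    openstack --os-cloud "$CLOUD" subnet delete subnet-testbed-management
    openstack --os-cloud "$CLOUD" network delete net-testbed-management
    openstack --os-cloud "$CLOUD" security group delete testbed-node testbed-management
    openstack --os-cloud "$CLOUD" floating ip delete 81.163.192.14
    openstack --os-cloud "$CLOUD" router delete testbed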
2025-04-05 13:06:42.626024 |
2025-04-05 13:06:42.626127 | PLAY RECAP
2025-04-05 13:06:42.626187 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-05 13:06:42.626214 |
2025-04-05 13:06:42.752938 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-05 13:06:42.756289 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-05 13:06:43.490304 |
2025-04-05 13:06:43.490464 | PLAY [Base post-fetch]
2025-04-05 13:06:43.520424 |
2025-04-05 13:06:43.520560 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-05 13:06:43.607714 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:43.619122 |
2025-04-05 13:06:43.619285 | TASK [fetch-output : Set log path for single node]
2025-04-05 13:06:43.672929 | orchestrator | ok
2025-04-05 13:06:43.680508 |
2025-04-05 13:06:43.680622 | LOOP [fetch-output : Ensure local output dirs]
2025-04-05 13:06:44.166765 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/logs"
2025-04-05 13:06:44.479514 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/artifacts"
2025-04-05 13:06:44.732748 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/acd7f2aa96a14e52945307d1493fa367/work/docs"
2025-04-05 13:06:44.747496 |
2025-04-05 13:06:44.747620 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-05 13:06:45.572878 | orchestrator | changed: .d..t...... ./
2025-04-05 13:06:45.573225 | orchestrator | changed: All items complete
2025-04-05 13:06:45.573272 |
2025-04-05 13:06:46.145797 | orchestrator | changed: .d..t...... ./
2025-04-05 13:06:46.752242 | orchestrator | changed: .d..t...... ./
2025-04-05 13:06:46.781690 |
2025-04-05 13:06:46.781966 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-05 13:06:46.830613 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:46.837692 | orchestrator | skipping: Conditional result was False
2025-04-05 13:06:46.902124 |
2025-04-05 13:06:46.902255 | PLAY RECAP
2025-04-05 13:06:46.902317 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-05 13:06:46.902348 |
2025-04-05 13:06:47.030874 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-05 13:06:47.034068 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-05 13:06:47.729651 |
2025-04-05 13:06:47.729831 | PLAY [Base post]
2025-04-05 13:06:47.758809 |
2025-04-05 13:06:47.758960 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-05 13:06:48.534554 | orchestrator | changed
2025-04-05 13:06:48.575287 |
2025-04-05 13:06:48.575421 | PLAY RECAP
2025-04-05 13:06:48.575510 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-05 13:06:48.575581 |
2025-04-05 13:06:48.704492 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-05 13:06:48.710535 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-05 13:06:49.496183 |
2025-04-05 13:06:49.496340 | PLAY [Base post-logs]
2025-04-05 13:06:49.513152 |
2025-04-05 13:06:49.513284 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-05 13:06:49.990471 | localhost | changed
2025-04-05 13:06:49.995568 |
2025-04-05 13:06:49.995699 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-05 13:06:50.027393 | localhost | ok
2025-04-05 13:06:50.035220 |
2025-04-05 13:06:50.035384 | TASK [Set zuul-log-path fact]
2025-04-05 13:06:50.054874 | localhost | ok
2025-04-05 13:06:50.067045 |
2025-04-05 13:06:50.067174 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-05 13:06:50.096297 | localhost | ok
2025-04-05 13:06:50.102809 |
2025-04-05 13:06:50.102924 | TASK [upload-logs : Create log directories]
2025-04-05 13:06:50.603691 | localhost | changed
2025-04-05 13:06:50.608956 |
2025-04-05 13:06:50.609078 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-05 13:06:51.167513 | localhost -> localhost | ok: Runtime: 0:00:00.017715
2025-04-05 13:06:51.175872 |
2025-04-05 13:06:51.176005 | TASK [upload-logs : Upload logs to log server]
2025-04-05 13:06:51.773120 | localhost | Output suppressed because no_log was given
2025-04-05 13:06:51.778893 |
2025-04-05 13:06:51.779057 | LOOP [upload-logs : Compress console log and json output]
2025-04-05 13:06:51.857111 | localhost | skipping: Conditional result was False
2025-04-05 13:06:51.874543 | localhost | skipping: Conditional result was False
2025-04-05 13:06:51.888583 |
2025-04-05 13:06:51.888741 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-05 13:06:51.966325 | localhost | skipping: Conditional result was False
2025-04-05 13:06:51.966716 |
2025-04-05 13:06:51.979597 | localhost | skipping: Conditional result was False
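Editor's note: the `changed: .d..t...... ./` entries in the fetch-output loop above are rsync itemized-change codes (here: a directory whose timestamp changed), printed because the role syncs the staged logs, artifacts and docs back to the executor's build workspace. A minimal command that produces the same style of output is shown below; the paths and hostname are placeholders, not the role's actual invocation:

    #!/usr/bin/env bash
    # -a preserves attributes, -z compresses in transit, and --itemize-changes (-i)
    # prints per-path codes such as ".d..t......".
    rsync -az --itemize-changes ~/zuul-output/logs/ executor.example.org:/var/lib/zuul/builds/BUILD_UUID/work/logs/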
2025-04-05 13:06:51.992177 |
2025-04-05 13:06:51.992336 | LOOP [upload-logs : Upload console log and json output]